Download spark with hive: how to

It took me a while to understand how to use it and from where, but I got that all figured out now and I've written the quick starts for HDFS, Spark and Hive. So in the end it was a question of adding the services from one docker-compose.yml to the other, along with all the necessary files. That is how the Hadoop-Spark-Hive docker-compose was built.

One note on the metadata of persistent relational entities (e.g. databases and tables): in a real production environment you always have a communal standalone metadata store. The embedded metastore used here is for experimental purposes only.

With everything running, connecting with beeline and issuing a couple of statements looks like this:

Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10009/> show databases
0: jdbc:hive2://localhost:10009/> show tables

On the server side, the corresponding log output (in chronological order) is:

23:50:50.423 INFO operation.ExecuteStatement: Processing kentyao's query: RUNNING_STATE -> FINISHED_STATE, statement: show databases, time taken: 0.035 seconds
23:50:52.968 INFO metastore.HiveMetaStore: 2: get_database: default
23:50:52.968 INFO HiveMetaStore.audit: ugi=kentyao ip=unknown-ip-addr cmd=get_database: default
23:50:52.970 INFO metastore.HiveMetaStore: 2: get_database: default
23:50:52.970 INFO HiveMetaStore.audit: ugi=kentyao ip=unknown-ip-addr cmd=get_database: default
23:50:52.972 INFO metastore.HiveMetaStore: 2: get_tables: db=default pat=*
23:50:52.986 INFO operation.ExecuteStatement: Processing kentyao's query: RUNNING_STATE -> FINISHED_STATE, statement: show tables, time taken: 0.03 seconds
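To illustrate what "adding the services from one docker-compose.yml to the other" can look like, here is a minimal sketch of a merged compose file. The service names, image tags, ports, and config-file paths below are hypothetical placeholders, not the exact ones used in this setup:

version: "3"
services:
  namenode:
    image: example/hadoop-namenode        # placeholder HDFS namenode image
    ports:
      - "9870:9870"                       # namenode web UI
    volumes:
      - ./conf/core-site.xml:/etc/hadoop/core-site.xml
  hive-metastore:
    image: example/hive-metastore         # placeholder Hive metastore image
    ports:
      - "9083:9083"                       # thrift metastore port
    depends_on:
      - namenode
  spark-master:
    image: example/spark-master           # placeholder Spark image
    ports:
      - "8080:8080"                       # Spark master web UI
    volumes:
      - ./conf/hive-site.xml:/spark/conf/hive-site.xml
    depends_on:
      - hive-metastore

The point is simply that the two stacks end up in one file, on one compose network, with the shared configuration files (core-site.xml, hive-site.xml) mounted into every container that needs them.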