
https://cwiki.apache.org/confluence/display/Hive/HiveClient#HiveClient-Python appears to be outdated.

When I add this to /etc/profile:

export PYTHONPATH=$PYTHONPATH:/usr/lib/hive/lib/py

I can then do the imports as listed in the link, with the exception of from hive import ThriftHive, which actually needs to be:

from hive_service import ThriftHive

Next, the port in the example was 10000; when I tried it, the program hung. The default Hive Thrift port is 9083, which stopped the hanging.

So I set it up like so:

from hive_service import ThriftHive
from thrift import Thrift
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
try:
    transport = TSocket.TSocket('<node-with-metastore>', 9083)
    transport = TTransport.TBufferedTransport(transport)
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = ThriftHive.Client(protocol)
    transport.open()
    client.execute("CREATE TABLE test(c1 int)")

    transport.close()
except Thrift.TException, tx:
    print '%s' % (tx.message)

I received the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/hive/lib/py/hive_service/ThriftHive.py", line 68, in execute
    self.recv_execute()
  File "/usr/lib/hive/lib/py/hive_service/ThriftHive.py", line 84, in recv_execute
    raise x
thrift.Thrift.TApplicationException: Invalid method name: 'execute'

But inspecting the ThriftHive.py file reveals the method execute within the Client class.

How may I use Python to access Hive?


18 Answers


I believe the easiest way is to use PyHive.

To install you'll need these libraries:

pip install sasl
pip install thrift
pip install thrift-sasl
pip install PyHive

Please note that although you install the library as PyHive, you import the module as pyhive, all lower-case.

If you're on Linux, you may need to install SASL separately before running the above. Install the package libsasl2-dev using apt-get, yum, or whatever package manager your distribution uses. For Windows there are some options on GNU.org; you can download a binary installer. On a Mac, SASL should be available if you've installed the Xcode developer tools (xcode-select --install in Terminal).

After installation, you can connect to Hive like this:

from pyhive import hive
conn = hive.Connection(host="YOUR_HIVE_HOST", port=PORT, username="YOU")

Now that you have the hive connection, you have options for how to use it. You can just straight-up query:

cursor = conn.cursor()
cursor.execute("SELECT cool_stuff FROM hive_table")
for result in cursor.fetchall():
  use_result(result)

...or to use the connection to make a Pandas dataframe:

import pandas as pd
df = pd.read_sql("SELECT cool_stuff FROM hive_table", conn)
  • 1
    FYI, as of now sasl doesn't work with python 3. Other info here.
    – joefromct
    Dec 16, 2015 at 5:17
  • 2
    I experienced some troubles when connecting to HiveServer2 on Debian. Error was: "SASL authentication failure: No worthy mechs found". I had to install libsasl2-modules package (via apt-get) to get it works.
    – noleto
    Apr 12, 2016 at 13:26
  • 1
    You have said that "For Windows there are some options on GNU.org, you can download a binary installer". Can you provide the link? cos I still have an issue.. stackoverflow.com/questions/42210901/… Feb 14, 2017 at 17:48
  • 1
    When I tried to install sasl via pip, I got an error: sasl/saslwrapper.h:22:23: fatal error: sasl/sasl.h: No such file or directory. After installing libsasl2-dev via apt-get, the installation works fine.
    – VeLKerr
    Jul 24, 2017 at 21:45
  • 1
    This was so helpful! Thank you
    – chris_aych
    Sep 10, 2020 at 18:28

I suspect that you are using HiveServer2, which is why the code doesn't work.

You can use pyhs2 to access your Hive correctly; example code is below:

import pyhs2

with pyhs2.connect(host='localhost',
               port=10000,
               authMechanism="PLAIN",
               user='root',
               password='test',
               database='default') as conn:
    with conn.cursor() as cur:
        #Show databases
        print cur.getDatabases()

        #Execute query
        cur.execute("select * from table")

        #Return column info from query
        print cur.getSchema()

        #Fetch table results
        for i in cur.fetch():
            print i

Note that you may need to install python-devel.x86_64 and cyrus-sasl-devel.x86_64 before installing pyhs2 with pip.

Hope this helps you.

Reference: https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-PythonClientDriver

  • 1
    Thanks for this, I was having issues installing pyhs2 with pip in CentOS 6, but your suggested YUM libraries did the trick. One quick comment, in your example code above, you need to indent everything below database='default') as conn: for it work correctly. There's some additional documentation here: github.com/BradRuderman/pyhs2
    – muro
    Oct 7, 2014 at 15:45
  • I am getting this error. I was wondering if you knew anything about this? github.com/BradRuderman/pyhs2/issues/32
    – Liondancer
    Dec 17, 2014 at 19:20
  • At the time I wrote this question, I was not on HiveServer2. However we just installed it yesterday, and I can confirm this answer works on HS2. Aug 26, 2015 at 20:59
  • that's awesome.. would this approach work for Hive Server 2 servers too? would this work with kerberized clusters? thanks.
    – Tagar
    Nov 16, 2015 at 23:55
  • How would you use kerberos authentication instead of plain?
    – TayTay
    Jan 14, 2016 at 16:35

The Python program below should work for accessing Hive tables from Python:

import commands

cmd = "hive -S -e 'SELECT * FROM db_name.table_name LIMIT 1;' "

status, output = commands.getstatusoutput(cmd)

if status == 0:
   print output
else:
   print "error"
  • 5
    +1 might be good in some quick-and-dirty cases when you can't install external yum or pip packages on a server.
    – Tagar
    Jul 29, 2016 at 19:41
  • 2
    @python-starter , your method only works when the hive resides on the same server where python is installed. If you are accessing the hive tables on a remote server, I guess something else is required.
    – ML_Passion
    Jan 11, 2017 at 22:30
  • 1
    @ML_Passion it could be just a few more lines of bash code to deal with the remote problem. However, parsing the result will be the real hell (esp. when trying this with Asian languages and weird dirty data from another team) Jun 27, 2017 at 17:02

To connect using a username/password and specifying ports, the code looks like this:

from pyhive import presto

cursor = presto.connect(host='host.example.com',
                    port=8081,
                    username='USERNAME:PASSWORD').cursor()

sql = 'select * from table limit 10'

cursor.execute(sql)

print(cursor.fetchone())
print(cursor.fetchall())

You can use the hive library; for that you need to import the ThriftHive class: from hive import ThriftHive

Try this example:

import sys

from hive import ThriftHive
from hive.ttypes import HiveServerException

from thrift import Thrift
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol

try:
  transport = TSocket.TSocket('localhost', 10000)
  transport = TTransport.TBufferedTransport(transport)
  protocol = TBinaryProtocol.TBinaryProtocol(transport)
  client = ThriftHive.Client(protocol)
  transport.open()
  client.execute("CREATE TABLE r(a STRING, b INT, c DOUBLE)")
  client.execute("LOAD TABLE LOCAL INPATH '/path' INTO TABLE r")
  client.execute("SELECT * FROM r")
  while True:
    row = client.fetchOne()
    if row is None:
       break
    print row

  client.execute("SELECT * FROM r")
  print client.fetchAll()
  transport.close()
except Thrift.TException, tx:
  print '%s' % (tx.message)
  • 1
    Hi Naveen. My apologies for the year+ response. When I try this example for one of my tables, it appears that the client.fetchOne() command is returning a string, not a row/array/dict object that is indexed by columns. Is this your impression as well? I would like to be able to access the individual columns. Regards. Feb 9, 2015 at 19:48

Here's a generic approach, which is easy for me because I keep connecting to several servers (SQL, Teradata, Hive, etc.) from Python. Hence, I use the pyodbc connector. Here are some basic steps to get going with pyodbc (in case you have never used it):

  • Prerequisite: You should have the relevant ODBC connection in your Windows setup before you follow the steps below. In case you don't have it, find the same here

Once complete: STEP 1. pip install: pip install pyodbc (here's the link to download the relevant driver from Microsoft's website)

STEP 2. now, import the same in your python script:

import pyodbc

STEP 3. Finally, go ahead and give the connection details as follows:

conn_hive = pyodbc.connect('DSN=YOUR_DSN_NAME;SERVER=YOUR_SERVER_NAME;UID=USER_ID;PWD=PSWD')

The best part of using pyodbc is that I have to import just one package to connect to almost any data source.
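
For completeness, here is a minimal sketch of querying through that pyodbc connection; the DSN and table names are placeholders, and the cursor/execute/fetchall calls are standard pyodbc DB-API usage:

import pyodbc

# hypothetical DSN configured in your ODBC manager; replace with your own
conn_hive = pyodbc.connect('DSN=YOUR_DSN_NAME;UID=USER_ID;PWD=PSWD')

cursor = conn_hive.cursor()
cursor.execute("SELECT * FROM db_name.table_name LIMIT 10")  # placeholder table
for row in cursor.fetchall():
    print(row)

cursor.close()
conn_hive.close()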


It is common practice to prohibit users from downloading and installing packages and libraries on cluster nodes. In this case the solutions of @python-starter and @goks work perfectly, if Hive runs on the same node. Otherwise, one can use beeline instead of the hive command line tool. See details

#python 2
import commands

cmd = 'beeline -u "jdbc:hive2://node07.foo.bar:10000/...<your connect string>" -e "SELECT * FROM db_name.table_name LIMIT 1;"'

status, output = commands.getstatusoutput(cmd)

if status == 0:
   print output
else:
   print "error"

.

#python 3
import subprocess

cmd = 'beeline -u "jdbc:hive2://node07.foo.bar:10000/...<your connect string>" -e "SELECT * FROM db_name.table_name LIMIT 1;"'

status, output = subprocess.getstatusoutput(cmd)

if status == 0:
   print(output)
else:
   print("error")
  • I am getting Error: Could not open client transport for any of the Server URI's in ZooKeeper: null (state=08S01,code=0) java.sql.SQLException: Could not open client transport for any of the Server URI's in ZooKeeper: nul
    – Vim
    Jul 5, 2023 at 13:01

The examples above are a bit out of date. One new example is here:

import pyhs2 as hive
import getpass
DEFAULT_DB = 'default'
DEFAULT_SERVER = '10.37.40.1'
DEFAULT_PORT = 10000
DEFAULT_DOMAIN = 'PAM01-PRD01.IBM.COM'

u = raw_input('Enter PAM username: ')
s = getpass.getpass()
connection = hive.connect(host=DEFAULT_SERVER, port= DEFAULT_PORT, authMechanism='LDAP', user=u + '@' + DEFAULT_DOMAIN, password=s)
statement = "select * from user_yuti.Temp_CredCard where pir_post_dt = '2014-05-01' limit 100"
cur = connection.cursor()

cur.execute(statement)
df = cur.fetchall() 

In addition to the standard Python program, a few libraries need to be installed to allow Python to build the connection to the Hadoop database.

1. Pyhs2, Python Hive Server 2 Client Driver

2. Sasl, Cyrus-SASL bindings for Python

3. Thrift, Python bindings for the Apache Thrift RPC system

4. PyHive, Python interface to Hive

Remember to change the permission of the executable:

chmod +x test_hive2.py
./test_hive2.py

Hope it helps you. Reference: https://sites.google.com/site/tingyusz/home/blogs/hiveinpython


Similar to eycheu's solution, but a little more detailed.

Here is an alternative solution specifically for hive2 that does not require PyHive or installing system-wide packages. I am working in a Linux environment where I do not have root access, so installing the SASL dependencies as mentioned in Tristin's post was not an option for me:

If you're on Linux, you may need to install SASL separately before running the above. Install the package libsasl2-dev using apt-get or yum or whatever package manager for your distribution.

Specifically, this solution focuses on leveraging the python package: JayDeBeApi. In my experience installing this one extra package on top of a python Anaconda 2.7 install was all I needed. This package leverages java (JDK). I am assuming that is already set up.

Step 1: Install JayDeBeApi

pip install jaydebeapi

Step 2: Download appropriate drivers for your environment:

  • Here is a link to the jars required for an enterprise CDH environment
  • Another post that talks about where to find jdbc drivers for Apache Hive

Store all .jar files in a directory. I will refer to this directory as /path/to/jar/files/.

Step 3: Identify your system's authentication mechanism:

In the pyhive solutions listed I've seen PLAIN listed as the authentication mechanism as well as Kerberos. Note that your JDBC connection URL will depend on the authentication mechanism you are using. I will explain the Kerberos solution without passing a username/password. Here is more information on Kerberos authentication and options.

Create a Kerberos ticket if one is not already created

$ kinit

Tickets can be viewed via klist.

You are now ready to make the connection via python:

import jaydebeapi
import glob
# Creates a list of jar files in the /path/to/jar/files/ directory
jar_files = glob.glob('/path/to/jar/files/*.jar')

host='localhost'
port='10000'
database='default'

# note: your driver will depend on your environment and drivers you've
# downloaded in step 2
# this is the driver for my environment (jdbc3, hive2, cloudera enterprise)
driver='com.cloudera.hive.jdbc3.HS2Driver'

conn_hive = jaydebeapi.connect(driver,
        'jdbc:hive2://'+host+':' +port+'/'+database+';AuthMech=1;KrbHostFQDN='+host+';KrbServiceName=hive'
                           ,jars=jar_files)

If you only care about reading, then you can read it directly into a pandas dataframe with ease via eycheu's solution:

import pandas as pd
df = pd.read_sql("select * from table", conn_hive)

Otherwise, here is a more versatile communication option:

cursor = conn_hive.cursor()
sql_expression = "select * from table"
cursor.execute(sql_expression)
results = cursor.fetchall()

You can imagine that if you wanted to create a table, you would not need to "fetch" the results, but could submit a CREATE TABLE query instead.
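
For instance, a minimal sketch of submitting a DDL statement over the same jaydebeapi connection (the table name here is just a placeholder):

# reuse the conn_hive connection created above
cursor = conn_hive.cursor()
# a CREATE TABLE statement returns no rows, so there is nothing to fetch
cursor.execute("CREATE TABLE IF NOT EXISTS my_test_table (c1 INT, c2 STRING)")
cursor.close()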

  • It's a good solution. The only thing that worries me with the jaydebeapi approach is that database connectivity requires a jvm being spawned - which is a huge overhead. Also jaydebeapi is still marked as "beta" on pip registry and hasn't been updated for last couple of years.
    – Tagar
    Jan 15, 2019 at 5:23

Similar to @python-starter's solution. But the commands package is not available on Python 3.x, so an alternative solution is to use subprocess in Python 3.x.

import subprocess

cmd = "hive -S -e 'SELECT * FROM db_name.table_name LIMIT 1;' "

status, output = subprocess.getstatusoutput(cmd)

if status == 0:
   print(output)
else:
   print("error")
  • I am using python 3 (virtual environment) though I am getting this error AttributeError: module 'subprocess' has no attribute 'getstatusoutput' Jul 2, 2018 at 11:03
  • @SandeepSingh Check if you have any python module with name 'subprocess' in your working directory
    – goks
    Jul 11, 2018 at 17:15

pyhs2 is no longer maintained. A better alternative is impyla.

Don't be confused that some of the examples below are about Impala; just change the port to 10000 (the default for HiveServer2), and it'll work the same way as the Impala examples. It's the same protocol (Thrift) that is used for both Impala and Hive.

https://github.com/cloudera/impyla

It has many more features than pyhs2; for example, it has Kerberos authentication, which is a must for us (a Kerberos connection sketch follows the basic example below).

from impala.dbapi import connect
conn = connect(host='my.host.com', port=10000)
cursor = conn.cursor()
cursor.execute('SELECT * FROM mytable LIMIT 100')
print cursor.description  # prints the result set's schema
results = cursor.fetchall()

##
cursor.execute('SELECT * FROM mytable LIMIT 100')
for row in cursor:
    process(row)
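
As mentioned above, impyla also supports Kerberos. A minimal sketch, assuming a kerberized HiveServer2 and an existing ticket (kinit); the host, port, and service name are placeholders for your environment:

from impala.dbapi import connect

conn = connect(host='my.host.com',
               port=10000,                      # HiveServer2 default
               auth_mechanism='GSSAPI',         # Kerberos
               kerberos_service_name='hive')
cursor = conn.cursor()
cursor.execute('SELECT * FROM mytable LIMIT 10')
print(cursor.fetchall())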

Cloudera is now putting more effort into hs2client https://github.com/cloudera/hs2client, which is a C/C++ HiveServer2/Impala client. It might be a better option if you push a lot of data to/from Python (it has a Python binding too: https://github.com/cloudera/hs2client/tree/master/python).

Some more information on impyla:

  • 1
    I have a strange problem. If I give "select * from <table> limit 10" its working fine. But if i give "select * from <table> where hostname='<host>'" it is failing. It says raise NotSupportedError("Hive does not have transactions") # pragma: no cover Any idea why this is happening? I am able to execute the same query on Ambari. Jun 18, 2018 at 14:03
  • I get the same kind error. No idea where to start looking.
    – Carsten
    Oct 31, 2019 at 14:44
  • Impyla's primary role was an interface for Impala, and it doesn't support transactions, hence NotImplemented errors when you try to commit/rollback etc. Filed github.com/cloudera/impyla/issues/372 improvement request
    – Tagar
    Nov 8, 2019 at 18:29
  • This doesn't work. Only work for impala May 9, 2023 at 7:55

Simplest method | Using sqlalchemy

Requirements:

  • pip install pyhive

Code:

import pandas as pd
from sqlalchemy import create_engine

SECRET = {'username':'lol', 'password': 'lol'}
user_name = SECRET.get('username')
passwd = SECRET.get('password')

host_server = 'x.x.x.x'
port = '10000'
database = 'default'
conn = f'hive://{user_name}:{passwd}@{host_server}:{port}/{database}'
engine = create_engine(conn, connect_args={'auth': 'LDAP'})

query = "select * from tablename limit 100"
data = pd.read_sql(query, con=engine)
print(data)

This can be a quick hack to connect Hive and Python:

from pyhive import hive
cursor = hive.connect('YOUR_HOST_NAME').cursor()
cursor.execute('SELECT * from table_name LIMIT 5',async=True)
print cursor.fetchall()

Output: List of Tuples


You can use the Python JayDeBeApi package to create a DB-API connection from the Hive or Impala JDBC driver and then pass the connection to the pandas.read_sql function to return data in a pandas dataframe.

import jaydebeapi
# Apparently need to load the jar files for the first time for impala jdbc driver to work 
conn = jaydebeapi.connect('com.cloudera.hive.jdbc41.HS2Driver',
['jdbc:hive2://host:10000/db;AuthMech=1;KrbHostFQDN=xxx.com;KrbServiceName=hive;KrbRealm=xxx.COM', "",""],
jars=['/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/HiveJDBC41.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/TCLIServiceClient.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/commons-codec-1.3.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/commons-logging-1.1.1.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/hive_metastore.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/hive_service.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/httpclient-4.1.3.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/httpcore-4.1.3.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/libfb303-0.9.0.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/libthrift-0.9.0.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/log4j-1.2.14.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/ql.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/slf4j-api-1.5.11.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/slf4j-log4j12-1.5.11.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/zookeeper-3.4.6.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/ImpalaJDBC41.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/TCLIServiceClient.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/commons-codec-1.3.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/commons-logging-1.1.1.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/hive_metastore.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/hive_service.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/httpclient-4.1.3.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/httpcore-4.1.3.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/libfb303-0.9.0.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/libthrift-0.9.0.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/log4j-1.2.14.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/ql.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/slf4j-api-1.5.11.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/slf4j-log4j12-1.5.11.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/zookeeper-3.4.6.jar'
])

# the previous call has initialized the jar files, so technically this call need not include the required jar files
impala_conn = jaydebeapi.connect('com.cloudera.impala.jdbc41.Driver',
['jdbc:impala://host:21050/db;AuthMech=1;KrbHostFQDN=xxx.com;KrbServiceName=impala;KrbRealm=xxx.COM',"",""],
jars=['/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/HiveJDBC41.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/TCLIServiceClient.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/commons-codec-1.3.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/commons-logging-1.1.1.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/hive_metastore.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/hive_service.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/httpclient-4.1.3.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/httpcore-4.1.3.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/libfb303-0.9.0.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/libthrift-0.9.0.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/log4j-1.2.14.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/ql.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/slf4j-api-1.5.11.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/slf4j-log4j12-1.5.11.jar',
'/hadp/opt/jdbc/hive_jdbc_2.5.18.1050/2.5.18.1050 GA/Cloudera_HiveJDBC41_2.5.18.1050/zookeeper-3.4.6.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/ImpalaJDBC41.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/TCLIServiceClient.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/commons-codec-1.3.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/commons-logging-1.1.1.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/hive_metastore.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/hive_service.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/httpclient-4.1.3.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/httpcore-4.1.3.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/libfb303-0.9.0.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/libthrift-0.9.0.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/log4j-1.2.14.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/ql.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/slf4j-api-1.5.11.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/slf4j-log4j12-1.5.11.jar',
'/hadp/opt/jdbc/impala_jdbc_2.5.35/2.5.35.1055 GA/Cloudera_ImpalaJDBC41_2.5.35/zookeeper-3.4.6.jar'
])

import pandas as pd
df1 = pd.read_sql("SELECT * FROM tablename", conn)
df2 = pd.read_sql("SELECT * FROM tablename", impala_conn)

conn.close()
impala_conn.close()

The easiest way is to use PyHive.

To install you'll need these libraries:

pip install sasl
pip install thrift
pip install thrift-sasl
pip install PyHive

After installation, you can connect to Hive like this:

from pyhive import hive
conn = hive.Connection(host="YOUR_HIVE_HOST", port=PORT, username="YOU")

Now that you have the hive connection, you have options for how to use it. You can just straight-up query:

cursor = conn.cursor()
cursor.execute("SELECT cool_stuff FROM hive_table")
for result in cursor.fetchall():
  use_result(result)

...or to use the connection to make a Pandas dataframe:

import pandas as pd
df = pd.read_sql("SELECT cool_stuff FROM hive_table", conn)

None of the answers demonstrate how to fetch and print the table headers. This modifies the standard example from PyHive, which is widely used and actively maintained.

from pyhive import hive
cursor = hive.connect(host="localhost", 
                      port=10000, 
                      username="shadan", 
                      auth="KERBEROS", 
                      kerberos_service_name="hive"
                      ).cursor()
cursor.execute("SELECT * FROM my_dummy_table LIMIT 10")
columnList = [desc[0] for desc in cursor.description]
headerStr = ",".join(columnList)
headerTuple = tuple(headerStr.split(","))
print(headerTuple)
print(cursor.fetchone())
print(cursor.fetchall())

I have solved the same problem as you. Here is my operating environment (System: Linux, Version: Python 3.6, Package: PyHive). Please refer to my answer as follows:

from pyhive import hive
conn = hive.Connection(host='149.129.***.**', port=10000, username='*', database='*',password="*",auth='LDAP')

The key point is to add the password and auth parameters and to set auth to 'LDAP'. Then it works well; any questions, please let me know.
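
For example, a minimal sketch of running a query over that connection (the table name is a placeholder; this is standard PyHive cursor usage):

cursor = conn.cursor()
cursor.execute("SELECT * FROM some_table LIMIT 10")  # placeholder table
for row in cursor.fetchall():
    print(row)
cursor.close()
conn.close()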


By using the Python Client Driver:

pip install pyhs2

Then

import pyhs2

with pyhs2.connect(host='localhost',
               port=10000,
               authMechanism="PLAIN",
               user='root',
               password='test',
               database='default') as conn:
    with conn.cursor() as cur:
        #Show databases
        print cur.getDatabases()

        #Execute query
        cur.execute("select * from table")

        #Return column info from query
        print cur.getSchema()

        #Fetch table results
        for i in cur.fetch():
            print i

Reference: https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-PythonClientDriver
