Use pyodbc with Cloudera Impala ODBC and Kerberos

Initially tried the Python impyla package to connect to Cloudera Impala but ran into various errors and dependency issues, and 2 of 3 queries would hang or give errors. So next tried pyodbc to connect to Impala. Linux system requirements: the Cloudera ODBC Driver for Impala is recommended for Impala versions 2.8 through 3.3, and … Continue reading Use pyodbc with Cloudera Impala ODBC and Kerberos
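A minimal sketch of the pyodbc side, assuming a Kerberos ticket already exists (from kinit) and a DSN for the Cloudera Impala ODBC driver has been defined with AuthMech=1; the DSN and table names below are placeholders, not the exact configuration from the post:

# Hypothetical example: query Impala via a Kerberos-enabled ODBC DSN.
import pyodbc

# autocommit=True because Impala does not support transactions
conn = pyodbc.connect("DSN=Cloudera Impala DSN", autocommit=True)
cursor = conn.cursor()
cursor.execute("SELECT * FROM default.sample_table LIMIT 10")
for row in cursor.fetchall():
    print(row)
cursor.close()
conn.close()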

Connect DBeaver SQL Tool to Cloudera Hive/Impala with Kerberos

DBeaver (https://dbeaver.io/) is a powerful, free, open-source SQL editor tool that can connect to 80+ different databases. The procedures below will enable DBeaver to connect to Cloudera Hive/Impala using Kerberos. Initially tried to use the Cloudera JDBC connection but it kept giving a Kerberos error: [Cloudera]ImpalaJDBCDriver Error initialized or created transport for authentication: [Cloudera]ImpalaJDBCDriver Unable … Continue reading Connect DBeaver SQL Tool to Cloudera Hive/Impala with Kerberos
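As a rough illustration of the kind of settings involved (not necessarily the exact fix from the post; paths, hostname, and realm are placeholders), the usual ingredients are JVM options in dbeaver.ini that point DBeaver's Java runtime at your Kerberos configuration, for example:

-Djava.security.krb5.conf=C:\kerberos\krb5.ini
-Djavax.security.auth.useSubjectCredsOnly=false

and a JDBC URL that names the HiveServer2 Kerberos principal, for example:

jdbc:hive2://hiveserver.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM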

Run any ad-hoc SQL query in Power BI desktop

It is not clearly documented how to run an arbitrary SQL query in Power BI Desktop, but it is definitely possible and easy to do: first click on Edit Queries in the top ribbon, then go to Advanced Editor and type in the SQL query as given in the picture below. … Continue reading Run any ad-hoc SQL query in Power BI desktop
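For illustration only (this is a made-up Power Query (M) expression with a placeholder DSN and SQL text, not the query from the original screenshot), the Advanced Editor expression boils down to passing the SQL string straight to the data source:

let
    // send an arbitrary SQL statement through an ODBC DSN (e.g. an Impala DSN)
    Source = Odbc.Query("dsn=Cloudera Impala DSN", "SELECT col1, COUNT(*) AS cnt FROM default.sample_table GROUP BY col1")
in
    Source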

Connect Microsoft Power BI desktop to Cloudera Impala or Hive with Kerberos

Microsoft Power BI Desktop is free and can successfully connect to a Cloudera Impala or Hive database with Kerberos security enabled. The blog post below only shows the Impala driver, but the same procedure works with the Hive driver as well. The basic steps are: install the MIT Kerberos client for Windows and make sure you … Continue reading Connect Microsoft Power BI desktop to Cloudera Impala or Hive with Kerberos
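For illustration, assuming MIT Kerberos rather than Active Directory and using a made-up realm and KDC host, the MIT Kerberos client for Windows reads its realm settings from a krb5.ini along these lines (typically under C:\ProgramData\MIT\Kerberos5\):

# placeholder realm and KDC host -- replace with your cluster's values
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }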

Use Pandas in Jupyter PySpark3 kernel to query Hive table

The following Python code reads a Hive table and converts it to a Pandas dataframe so you can use Pandas to process the rows. NOTE: be careful when copying and pasting the code below; the double quotes need to be retyped because they get converted to curly quotes, which causes syntax errors.
--------------------------------------------------------------------------------------------------------------
import pandas as pd
from pyspark import SparkConf, SparkContext … Continue reading Use Pandas in Jupyter PySpark3 kernel to query Hive table
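Since the excerpt above cuts the code off, here is a minimal sketch of the same idea using the newer SparkSession API instead of the post's SparkConf/SparkContext setup; the table name and session settings are placeholders:

# Hypothetical example: load a Hive table via Spark SQL and hand it to Pandas.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-to-pandas")
         .enableHiveSupport()      # lets Spark see the Hive metastore
         .getOrCreate())

spark_df = spark.sql("SELECT * FROM default.sample_table LIMIT 1000")
pandas_df = spark_df.toPandas()    # collects the rows onto the driver, hence the LIMIT
print(pandas_df.head())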

Tableau Desktop connect to Cloudera Hadoop using Kerberos

Reference: http://website4everything.blogspot.com/2015/04/connecting-tableau-to-hive-server-2.html Connecting Tableau to Cloudera Hive or Impala with Kerberos authentication involves the following basic steps. [Note: steps 1, 2, 3 and 4 are not needed if your Hadoop cluster uses Active Directory Kerberos instead of MIT Kerberos, as the ticket is generated automatically by AD.] Download and install the MIT Kerberos client for Windows; set … Continue reading Tableau Desktop connect to Cloudera Hadoop using Kerberos
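As a quick illustration of the MIT Kerberos part (the principal and realm are placeholders), once the client is installed you can obtain and verify a ticket from a Windows command prompt before pointing Tableau at the cluster:

REM obtain a ticket for your user principal (MIT Kerberos kinit.exe)
kinit userid@EXAMPLE.COM
REM list the cached tickets to confirm the TGT was issued
klist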

Run a Python program to access Hadoop webhdfs and Hive with Kerberos enabled

The following Python code makes REST calls to a secure, Kerberos-enabled Hadoop cluster and uses the webhdfs REST API to get file data. You need to first run $ kinit userid@REALM to authenticate and obtain a Kerberos ticket for the user. Make sure the Python modules requests and requests_kerberos have been installed; otherwise install them, for example: … Continue reading Run a Python program to access Hadoop webhdfs and Hive with Kerberos enabled
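A minimal sketch of such a call, assuming the kinit ticket already exists and using a placeholder NameNode host, port, and file path (this is not the original post's exact code):

# Hypothetical example: read a file through WebHDFS with Kerberos (SPNEGO) auth.
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

url = "http://namenode.example.com:50070/webhdfs/v1/user/myuser/sample.txt"
resp = requests.get(url,
                    params={"op": "OPEN"},
                    auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL),
                    allow_redirects=True)   # WebHDFS redirects the read to a datanode
resp.raise_for_status()
print(resp.text)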

Business Intelligence, ETL and Data Science tools

Free or open-source BI/ETL tools: Talend = ETL tool, leader in Gartner Magic Quadrant; Streamsets = ETL tool; Apache Nifi = ETL tool; Pentaho = desktop and server version BI/ETL tool; HUE = Hadoop analytics server, BI, query tool; KNIME = data science leader in Gartner Magic Quadrant 2017, desktop version; Jupyter Notebook … Continue reading Business Intelligence, ETL and Data Science tools

Install Jupyterhub

JupyterHub prerequisites: before installing JupyterHub, you will need a Linux/Unix-based system with over 10GB of free space and Python 3.4 or greater; an understanding of using pip or conda for installing Python packages is helpful. Installation using conda: check if the Anaconda package is already installed: $ dpkg -l | grep conda (or $ rpm -ql conda if using RHEL/CentOS). If Anaconda … Continue reading Install Jupyterhub
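For reference, a common conda-based install looks like this (these are the standard conda-forge packages, not necessarily the exact commands from the post):

$ conda install -c conda-forge jupyterhub   # installs the hub and configurable-http-proxy
$ conda install -c conda-forge notebook     # installs the notebook server the hub spawns
$ jupyterhub --version                      # confirm the install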