This blog post applies to Microsoft® HDInsight Preview for a Windows machine. In this post, we’ll see how you can browse HDFS (the Hadoop Distributed File System).
1. I am assuming the Hadoop services are running without issues on your machine.
2. Now, can you see the Hadoop Name Node Status icon on your desktop? Yes? Great! Open it (it opens in a browser).
3. Here’s what you’ll see:
4. Can you see the “Browse the filesystem” link? Click on it. You’ll see:
5. I’ve used /user/data recently, so let me browse to see what’s inside this directory:
6. You can also type a location into the text box that says Goto.
7. If you’re on the command line, you can do the same via the command:
hadoop fs -ls /
And if you want to browse files inside a particular directory, pass that directory instead:
hadoop fs -ls /user/data
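A few other `hadoop fs` commands come in handy while browsing. Here is a quick sketch — the paths and file names below are just examples, not files that necessarily exist on your cluster:

```shell
hadoop fs -mkdir /user/data/newdir                         # create a directory
hadoop fs -cat /user/data/somefile.txt                     # print a file's contents (hypothetical file)
hadoop fs -copyToLocal /user/data/somefile.txt C:\temp\    # download a file to the local disk
hadoop fs -rmr /user/data/newdir                           # remove a directory and its contents
```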
2. Make sure that the cluster is up and running! To check this, I click on the “Microsoft HDInsight Dashboard” icon or open http://localhost:8085/ on my machine.
Did you get a “wait for cluster to start..” message? No? Great! Hopefully all your services are working and you are good to go now!
3. Before we begin, decide on three things:
3a: The username and password that Sqoop will use to log in to the SQL Server database. If you create a new username and password, test them via SSMS before you proceed.
3b: Select the table that you want to load into HDFS.
In my case, it’s this table:
3c: The target directory in HDFS. In my case, I want it to be /user/data/sqoopstudent1.
You can create it with the command: hadoop fs -mkdir /user/data/sqoopstudent1
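With those three decisions made, the Sqoop import command looks roughly like the following sketch. Everything here except the target directory is a placeholder — substitute your own server, database name, credentials, and table (the `^` is just the Windows command-line continuation character):

```shell
sqoop import ^
  --connect "jdbc:sqlserver://localhost:1433;databaseName=MyDatabase" ^
  --username sqoopuser ^
  --password MyP@ssword ^
  --table student ^
  --target-dir /user/data/sqoopstudent1
```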
1. Upload Twitter Text Data into Hadoop on Azure cluster
2. Create a Hive Table and load the data uploaded in step 1 to the Hive Table
3. Analyze data in Hive via Excel Add-in
Before we begin, I assume you have access to Hadoop on Azure, have your sample data ready (don’t have any? learn from a blog post), are familiar with the Hadoop ecosystem, and know your way around the Hadoop on Azure Dashboard.
Now, here are the steps involved:
STEP 1: Upload Twitter Text Data into Hadoop on Azure cluster
1. Have the data you want to upload ready! I am just going to copy-paste the file from my host machine to the RDP’ed machine. In this case, the machine that I am RDP’ing into is the Hadoop on Azure cluster.
For the purpose of this blog post, I have a text file containing 1,500 tweets:
2. Open a web browser and go to your cluster in Hadoop on Azure.
3. RDP into your Hadoop on Azure cluster
4. Copy-Paste the File. It’s a small data file so this approach works for now.
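As an aside: copy-paste over RDP is fine for a small file, but for larger files you would typically upload straight into HDFS from the cluster’s Hadoop command line instead. A hedged sketch (the local path is an example, not the file from this post):

```shell
hadoop fs -mkdir /user/data
hadoop fs -put C:\temp\tweets.txt /user/data/
hadoop fs -ls /user/data
```

If you go this route, you would load the Hive table with LOAD DATA INPATH (without LOCAL), since the file would already be in HDFS rather than on the local disk.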
Step 2: Create a Hive Table and load the data uploaded in step 1 to the Hive Table
1. Stay on the machine that you Remote Desktopped (RDP’ed) into.
2. Open the Hadoop command line (you’ll see an icon on your desktop).
3. Switch to Hive by running hive at the Hadoop command prompt:
4. Use the following Hive Commands:
DROP TABLE IF EXISTS TweetSampleTable;
CREATE TABLE TweetSampleTable (
    id string,
    text string,
    favorited string,
    replyToSN string,
    created string,
    truncated string,
    replyToSID string,
    replyToUID string,
    statusSource string,
    screenName string
);
LOAD DATA LOCAL INPATH 'C:\apps\dist\examples\data\tweets.txt' OVERWRITE INTO TABLE TweetSampleTable;
Note that for the purpose of this blog post, I’ve chosen string as the data type for all fields. The right choice depends on the data that you have; if I were building a real solution, I would spend more time choosing appropriate data types.
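Before moving on to Excel, it’s worth sanity-checking the table from the command line. A sketch using Hive’s `-e` option to run ad-hoc queries — the second query is just a hypothetical example of the kind of analysis you can do:

```shell
hive -e "SELECT COUNT(*) FROM TweetSampleTable;"
hive -e "SELECT screenName, COUNT(*) AS tweetCount FROM TweetSampleTable GROUP BY screenName ORDER BY tweetCount DESC LIMIT 10;"
```

If the first query returns roughly the number of tweets in your file, the load worked.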
Step 3. Analyze data in Hive via Excel Add-in
1. Switch to Hadoop on Azure Dashboard
2. Go to the Hive Console and run show tables; to verify that tweetsampletable exists.
3. Now, if you haven’t already, download and install the Hive ODBC Driver from the Downloads section of your Hadoop on Azure Dashboard.