SQL Server 2014 Was Announced in TechEd's Keynote!
While the focus of SQL Server 2012 was in-memory OLAP, the focus of this new release seems to be in-memory OLTP (along with cloud and big data).
Here's the blog post: http://blogs.technet.com/b/dataplatforminsider/archive/2013/06/03/sql-server-2014-unlocking-real-time-insights.aspx (also, thanks for the picture!)
I was selected to be a speaker at SQL Saturday Trinidad! And it was amazing: not only did I get a chance to interact with the wonderful people who are part of the SQL Server community there, but I also visited some beautiful places on this Caribbean island!
I visited Trinidad in January, just before their carnival season! And even though people were busy preparing for carnival, it was great to see them attend an entire day of SQL Server training:
And here’s me presenting on “Why Big Data Matters”:
(Thanks Niko for the photo!)
And after the event, I also got a chance to experience the beauty of this Caribbean island!
Thank you SQL Saturday 185 Team for a memorable time!
Presentation Slides: The slides were posted for the attendees prior to my presentation; if you want, you can view them here:
HDFS and MapReduce inner workings in a nutshell.
In this post, I want to point out that HDInsight (Hadoop on Windows) comes with a sample dataset (log files) that you can load using the following command:
1. Open the Hadoop command line and navigate to c:\Hadoop\GettingStarted
2. Execute the following command:
powershell -ExecutionPolicy unrestricted -F importdata.ps1 w3c
After you have successfully executed the command, you can see the sample files in the /w3c/input folder:
Conclusion: In this post, we saw how to load some data to Hadoop on Windows file system to get started. Your comments are very welcome.
Official Resource: http://gettingstarted.hadooponazure.com/loadingData.html
This blog post applies to Microsoft® HDInsight Preview for a Windows machine. In this post, we'll see how you can browse HDFS (the Hadoop Distributed File System).
1. I am assuming Hadoop Services are working without issues on your machine.
2. Now, can you see the Hadoop Name Node Status icon on your desktop? Yes? Great! Open it (it opens in your browser).
3. Here’s what you’ll see:
4. Can you see the "Browse the filesystem" link? Click on it. You'll see:
5. I've used /user/data lately, so let me browse to see what's inside this directory:
6. You can also type the location into the text box labeled "Goto".
7. If you’re on command line, you can do so via the command:
hadoop fs -ls /
And if you want to browse files inside a particular directory (here, the /user/data directory from earlier):
hadoop fs -ls /user/data
HDFS File System Shell Guide
Conclusion: In this post, we saw how to browse the Hadoop file system via the Hadoop command line and the Hadoop Name Node Status page.
Problem Statement: Find Maximum Temperature for a city from the Input data.
Step 1: Input Files:
Step 2: Map Function
Let's say Map1, Map2 & Map3 run on File1, File2 & File3 in parallel. Here is their output:
(Note how each outputs "Key - Value" pairs. The key will be used by the reduce function later to do a "group by".)
Step 3: Reduce Function
The Reduce function takes the input from Map1, Map2 & Map3 and produces an output:
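The flow above can be sketched in Python as a minimal in-memory simulation (this is not actual Hadoop code; the city names and temperatures are made up for illustration):

```python
from collections import defaultdict

# Each "file" is a list of lines in the form "City,Temperature"
file1 = ["Boston,25", "New York,30"]
file2 = ["Boston,32", "New York,28"]
file3 = ["Boston,21", "New York,35"]

def map_fn(lines):
    """Map phase: emit a (city, temperature) key-value pair per line."""
    for line in lines:
        city, temp = line.split(",")
        yield city, int(temp)

def reduce_fn(pairs):
    """Reduce phase: group the pairs by city, then take the max per group."""
    grouped = defaultdict(list)
    for city, temp in pairs:
        grouped[city].append(temp)
    return {city: max(temps) for city, temps in grouped.items()}

# Map1, Map2 & Map3 would run on the three files in parallel;
# here we simply chain their outputs before reducing.
pairs = [kv for f in (file1, file2, file3) for kv in map_fn(f)]
print(reduce_fn(pairs))  # {'Boston': 32, 'New York': 35}
```

Swapping `max` for `min` in the reduce function gives you the minimum-temperature variant mentioned below.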
In this post, we visualized the MapReduce programming model with an example: finding the maximum temperature for a city. And as you can imagine, you can extend this post to visualize:
1) Finding the minimum temperature for a city.
2) In this post, the key was the city, but you could substitute any other relevant real-world entity to solve similar-looking problems.
I hope this helps.
Visualizing MapReduce Algorithm with WordCount Example
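The WordCount flow follows the same Map/Reduce shape as the max-temperature example. Here is a minimal in-memory Python sketch (the input lines are made up for illustration; this is not actual Hadoop code):

```python
from collections import Counter

lines = ["the quick brown fox", "the lazy dog", "the quick dog"]

def map_fn(line):
    """Map phase: emit a (word, 1) pair for every word in the line."""
    for word in line.split():
        yield word, 1

def reduce_fn(pairs):
    """Reduce phase: group by word and sum the counts."""
    counts = Counter()
    for word, one in pairs:
        counts[word] += one
    return dict(counts)

pairs = [kv for line in lines for kv in map_fn(line)]
print(reduce_fn(pairs))
# {'the': 3, 'quick': 2, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 2}
```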