Someone asked on Quora how to learn and explore the field of Big Data algorithms. They also mentioned having some background in Python already and wanted ideas for a good project to work on. With that context, here is my reply:
There are two broad roles available in Data/Big-Data world:
- Engineering-oriented: Data Engineer, Data Warehousing specialist, Big Data Engineer, Business Intelligence Engineer. All of these roles focus on building data pipelines, using code and tools, to get data into some centralized location.
- Business-oriented: Data Analyst, Data Scientist. These roles involve using data (from those centralized sources) to help business leaders make better decisions.*
*Smaller companies (and startups) tend to have roles where small teams (or just one person) do it all, so the distinction is not as apparent.
Now, given your background in Python and programming, you might be a great fit for Data Engineer roles. I would recommend learning Apache Spark (since you can use Python code with it) and starting to build data pipelines. As you gain a bit more experience, you can learn how to build and deploy end-to-end machine learning projects with Python and Apache Spark. If you acquire these skills and keep learning, I am sure you will end up with a good project.
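To make "building a data pipeline" concrete, here is a minimal extract-transform-load sketch in plain Python. The data and function names are hypothetical, purely for illustration; the same three stages map directly onto PySpark (read a source, apply DataFrame transformations such as groupBy/sum, write the result out).

```python
# Minimal ETL sketch in plain Python (hypothetical data, for illustration).
# In PySpark the same shape would be: spark.read.csv(...) -> groupBy/agg -> write.
import csv
import io

RAW_CSV = """user,amount
alice,10
bob,5
alice,7
"""  # hypothetical source data

def extract(text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: total spend per user (a groupBy + sum in Spark)."""
    totals = {}
    for row in rows:
        totals[row["user"]] = totals.get(row["user"], 0) + int(row["amount"])
    return totals

def load(totals):
    """Load: return sorted results; a real pipeline would write them
    to a warehouse table or a distributed file system instead."""
    return sorted(totals.items())

print(load(transform(extract(RAW_CSV))))  # [('alice', 17), ('bob', 5)]
```

A toy like this is a good first project precisely because each stage can later be swapped for its Spark equivalent without changing the overall design.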
Hope that helped and good luck!
SQL Server 2014 was announced in TechEd's keynote!
So while the focus of SQL Server 2012 was in-memory OLAP, the focus of this new release seems to be in-memory OLTP (along with cloud & Big Data).
Here’s the blog post: http://blogs.technet.com/b/dataplatforminsider/archive/2013/06/03/sql-server-2014-unlocking-real-time-insights.aspx (Also, thanks for the picture!)
I was selected to be a speaker at SQL Saturday Trinidad! It was amazing: not only did I get a chance to interact with the wonderful people who are part of the SQL Server community there, but I also visited some beautiful places on this Caribbean island!
I visited Trinidad in January, just before their carnival season. Even though people were busy preparing for carnival, it was great to see them attend an entire day of SQL Server training:
And here’s me presenting on “Why Big Data Matters”:
(Thanks Niko for the photo!)
And after the event, I also got a chance to experience the beauty of this Caribbean island!
Thank you SQL Saturday 185 Team for a memorable time!
Presentation slides: The slides were posted for the attendees prior to my presentation; if you want, you can view them here:
HDFS and MapReduce inner workings in a nutshell.
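The flow in the diagram can be simulated in a few lines of Python: mappers emit (key, value) pairs, the shuffle groups values by key, and reducers aggregate each group. Word count is the classic example; the input lines below are made up for illustration.

```python
# Tiny in-memory simulation of the MapReduce flow: map -> shuffle -> reduce.
from collections import defaultdict

def map_phase(line):
    # Mapper: emit (word, 1) for every word in its input split
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle/sort: group all emitted values by key across mappers
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["Hadoop stores data in HDFS", "MapReduce processes data in Hadoop"]
pairs = [pair for line in lines for pair in map_phase(line)]
print(reduce_phase(shuffle(pairs)))
```

In a real cluster, the map and reduce phases run in parallel across machines, and HDFS holds both the input splits and the final output.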
In this post, I want to point out that HDInsight (Hadoop on Windows) comes with sample datasets (log files) that you can load using the following commands:
1. Open the Hadoop command line and navigate to c:\Hadoop\GettingStarted
2. Execute the following command:
powershell -ExecutionPolicy unrestricted -F importdata.ps1 w3c
After you have successfully executed the command, you can see the sample files in the /w3c/input folder:
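The w3c sample data are web server logs in the W3C extended log format: a "#Fields:" directive names the columns, and each subsequent line holds space-delimited values. The header and log line below are hypothetical, just to show the shape of the records you'll find in /w3c/input:

```python
# Parsing one W3C extended log record (hypothetical sample, for illustration).
fields_line = "#Fields: date time cs-method cs-uri-stem sc-status"
log_line = "2013-01-01 08:30:00 GET /default.aspx 200"

# The "#Fields:" directive gives the column names for every data line.
field_names = fields_line.split(":", 1)[1].split()
record = dict(zip(field_names, log_line.split()))

print(record["cs-uri-stem"], record["sc-status"])  # /default.aspx 200
```

Records in this shape are exactly what the classic Hadoop log-analysis samples (top pages, status-code counts, and so on) run over.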
Conclusion: In this post, we saw how to load some data into the Hadoop on Windows file system to get started. Your comments are very welcome.
Official Resource: http://gettingstarted.hadooponazure.com/loadingData.html
This blog post applies to Microsoft® HDInsight Preview for a Windows machine. In this post, we’ll see how you can browse HDFS (the Hadoop file system).
1. I am assuming Hadoop Services are working without issues on your machine.
2. Now, can you see the Hadoop Name Node Status icon on your desktop? Yes? Great! Open it (it opens in a browser).
3. Here’s what you’ll see:
4. Can you see the “Browse the filesystem” link? Click on it. You’ll see:
5. I’ve used /user/data lately, so let me browse to see what’s inside this directory:
6. You can also type a location into the text box labeled “Goto”
7. If you’re on command line, you can do so via the command:
hadoop fs -ls /
And if you want to list the files inside a particular directory, pass its path (the /user/data directory from above, for instance):
hadoop fs -ls /user/data
HDFS File System Shell Guide
In this post, we saw how to browse the Hadoop file system via the Hadoop command line and the Hadoop Name Node Status page.
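If you want to reuse a listing from a script, the output of `hadoop fs -ls` is easy to parse: each entry line carries permissions, replication, owner, group, size, modification date/time, and path. The sample output below is hypothetical (in practice you would capture real output, e.g. with `subprocess`), but it follows that column layout:

```python
# Parsing `hadoop fs -ls` style output (hypothetical sample, for illustration).
SAMPLE_LS_OUTPUT = """\
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2013-02-01 10:15 /user/data
-rw-r--r--   1 hadoop supergroup       1024 2013-02-01 10:20 /user/data/sample.txt
"""

def parse_hdfs_ls(text):
    """Turn ls-style output into a list of dicts, one per entry."""
    entries = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 8:  # skip the "Found N items" header line
            continue
        entries.append({
            "permissions": parts[0],
            "owner": parts[2],
            "size": int(parts[4]),
            "path": parts[7],
        })
    return entries

for entry in parse_hdfs_ls(SAMPLE_LS_OUTPUT):
    print(entry["path"], entry["size"])
```

Note that HDFS paths can in principle contain spaces, in which case a plain `split()` would mis-parse the path column; for the simple directory layouts shown in this post it works fine.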