For anyone who wants to pick up Spark basics: Berkeley (Spark was developed at Berkeley's AMPLab), in collaboration with Databricks (a commercial company started by Spark's creators), just started a free MOOC on edx: https://www.edx.org/course/introduction-big-data-apache-spar...
(If you're wondering what Spark is, in a very unofficial nutshell: it is a computation / big data / analytics / machine learning / graph processing engine on top of Hadoop that usually performs much better and has arguably a much easier API in Python, Scala, Java, and now R.)
It has more than 5000 students so far, and the professor seems to answer every single question on Piazza (a popular student/teacher message board).
So far it looks really good. (It started a week ago, so you can still catch up; the 2nd lab isn't due until Friday 6/12 EOD, you get a 3-day "grace" period, and there isn't too much to catch up on.)
I use Spark for work (Scala API) and still learned one or two new things.
It uses the PySpark API, so there's no need to learn Scala. All homework labs are done in an IPython notebook. Very high quality so far, IMHO.
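If you're curious what the PySpark flavor looks like, here's a minimal word-count sketch (the file path is a placeholder, and the course notebooks set up the SparkContext for you):

    from pyspark import SparkContext

    sc = SparkContext(appName="WordCountSketch")  # the course notebooks create this for you

    # classic word count over an RDD; "notes.txt" is just a placeholder path
    counts = (sc.textFile("notes.txt")
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))

    print(counts.take(10))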
It is followed by a more advanced Spark course (Scalable Machine Learning), also by Berkeley & Databricks: https://www.edx.org/course/scalable-machine-learning-uc-berk...
(Not affiliated with edx, Berkeley, or Databricks; I just thought this is a good place for a PSA to those interested.)
The academic paper that originated Spark, by Matei Zaharia (the creator of Spark), got him a PhD dissertation award from the ACM in 2014 (http://www.acm.org/press-room/news-releases/2015/dissertatio...).
Spark also set a new record in large-scale sorting, beating Hadoop by far: https://databricks.com/blog/2014/11/05/spark-officially-sets...
I would love to learn about Spark, but as someone who lives in a third-world country I hate edx; I'm in love with Udacity and Coursera instead. Where I live we don't get much monthly traffic, but we can download everything we want between 1am and 6am, so there is no way to simply download a course from edx and use it later. I wish it were on Udacity or Coursera. Is there any torrent of the course material?
I'm doing the Spark course. edx has a download button on the videos, and you can download PDF files for the lectures. For the rest, like the embedded quizzes, I just screenshot or save as PDF for posterity.
Are you sure you can't download? Or maybe they've changed it recently.
Yes, I'm aware of the download button, but consider that every course is ~50 distinct videos, and given our downloading window you'll agree with me that downloading is extremely painful. Why don't they just put up the whole material (at least the videos) the way Udacity does?
Hadoop isn't required, and Spark only runs better if you can fit the data in memory.
Spark does micro-batch processing, whereas Hadoop traditionally does batch processing. Hadoop YARN is different now, and even old Hadoop can supposedly be just as fast if you can fit the data into memory, according to a meetup I attended.
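To give a rough idea of what "micro batch" means here, this is a minimal Spark Streaming sketch; the socket source on localhost:9999 and the 5-second interval are just placeholders:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(appName="MicroBatchSketch")
    ssc = StreamingContext(sc, 5)  # each "micro batch" covers 5 seconds of input

    lines = ssc.socketTextStream("localhost", 9999)  # hypothetical text source
    counts = (lines.flatMap(lambda l: l.split())
                   .map(lambda w: (w, 1))
                   .reduceByKey(lambda a, b: a + b))
    counts.pprint()  # prints the counts computed for each 5-second batch

    ssc.start()
    ssc.awaitTermination()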
I've been struggling to set it up correctly on my Debian machine. Are there Debian packages or a concise tutorial? I've found some things on the web, but certain things don't match my setup and I'm lost...
Apache Spark is a general-purpose distributed data processing and caching engine. It is an evolution of MapReduce concepts into more general "directed acyclic graph" processing, which is very flexible for defining and executing data processing work on a distributed cluster. It has some similarities to PrestoDB, Apache Drill, and/or Apache Storm (although it's not quite the same).
It also has some nice data mining libraries, a library for handling streaming data, some connectivity to external data sources and a library for accessing data stored in its generic "data frames" via SQL. "Data frames" are just an abstraction for a dataset, but they are distributed, and in-memory and/or persistent.
Personally, I like to think of it as an engine for data analysis/processing and queries, but different in that it is not really a "database" in the traditional sense. It's almost as if you took the SQL data processing engine out of your database and made it really flexible.
Edit: Also, all the functionality of Apache Spark is programmatically accessible in Java, Scala and Python, or through SQL with their Hive/thrift interface.
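A rough sketch of what that looks like in PySpark (Spark 1.3+), using a tiny made-up dataset; normally the data would come from HDFS, S3, JSON, Parquet, etc.:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext, Row

    sc = SparkContext(appName="DataFrameSketch")
    sqlContext = SQLContext(sc)

    rows = sc.parallelize([Row(name="a", value=1), Row(name="b", value=2)])
    df = sqlContext.createDataFrame(rows)

    # the same distributed data frame is reachable through the DataFrame API...
    df.groupBy("name").sum("value").show()

    # ...and through plain SQL once it is registered as a temp table
    df.registerTempTable("events")
    sqlContext.sql("SELECT name, SUM(value) FROM events GROUP BY name").show()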
I'm excited about SparkR, even though R is shunned in the field of big data. Between that and dplyr (which inspired the SparkR syntax) for data manipulation and sanitization, it should be much easier to write sane, reproducible code and visualizations for big data analysis. (The Python/Scala tutorials for Spark gave me a headache.)
I too have been mystified by R's licensing. I actually don't see how anyone can ship a commercial product using R in its current form. At very least you're in a legal gray area, at worst you are involuntarily open sourcing your product. Not that there's anything wrong with open sourcing a product, but I think there's an enormous potential issue that could foul up a lot of people down the track. The best discussion I have seen about this pretty much ends up with uninformed speculation. For now, I take the policy of "explore and prototype in R, build the real system in something else". Fortunately the flaws and limitations of R as a language make this a sensible choice for a host of other reasons as well.
My interpretation is that it will convert DataFrames to normal data.frames when necessary. Unfortunately, that negates Spark's performance benefits.
Since SparkR currently only supports aggregation, its usability is somewhat limited. Future versions will apparently have MLlib support, which should alleviate that.
Does anyone know if there's a guide to integrating Spark between a realtime, write-only database and a historical database?
I've looked into using Spark Streaming, but I can't work out how you could seamlessly transition data from a streaming batch to the historical db in a reasonably tight time period.
I'd be willing to pay for training if it came to it, but I don't think I'm using the right search terms.
Check out MemSQL's Community Edition for this very use case. We shipped MemSQL with an open-source multi-threaded, bi-directional connector to Spark.
The DB has two storage engines: in-memory row tables, and on-disk column tables for efficient compression and permanent retention. Then, it becomes an easy task of INSERT/SELECT...FROM to move data from memory to disk very quickly.
May I ask, why do you want to integrate Spark in the middle of the two? I am seeing Spark used more for distributed processing/caching data rather than being a conduit for data movement from one system to another.
You have a realtime write only database and you want to update a historical database from that write only database?
Or do you just want to join data across the two sources on the fly? Those are two pretty different use cases.
Based on what you're asking, you might find these two articles interesting:
Well, maybe I'm totally off on this, but it's more that I'd like to be able to run analytics that include real-time data without any notable pauses. I'm willing to look at anything for getting the data from the real-time capture into the historical database, as long as the Spark queries "just work".
Sorry, I think maybe "integrating between" was the wrong way to phrase it.
On the other hand, I mean there's clean up and preprocessing I want to do on data that goes into the historical dataset, so hey, why not do that clean up/processing with Spark?
I've seen the Lambda Architecture before, but it seems like it's kinda gone dark, and unless I just totally overlooked it, I don't think there was ever a "Hey, this is the way to do it, guys!" moment.
Not sure if you have used it but Spark is exceptionally good at data movement.
In fact that is what a lot of people initially started using it for (as a replacement for Hive/Pig). You can write SQL against HCatalog tables, do some transformation work then write the results out to a different system. We have hundreds of jobs that do just this.
Well, I guess that is the power of it being so general purpose. I have used Spark more for analytics (and Spark SQL) but not extensively for ETL. What you're saying makes sense: you're still using Spark as an execution/computation engine, just writing the plumbing code to use it like an intermediary ETL tool.
While Spark is not intended for ETL per se, when I need to copy data from S3 to HDFS I just use sc.textFile and saveAsTextFile, and in most of my use cases it's pretty fast.
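Something along these lines (the bucket names and paths are placeholders, and the s3n:// vs s3a:// scheme depends on your Hadoop build and credentials setup):

    from pyspark import SparkContext

    sc = SparkContext(appName="S3ToHdfsCopy")

    lines = sc.textFile("s3n://my-bucket/raw/2015-06-01/*")  # placeholder source path

    # any cleanup/transformation can happen here before writing out
    cleaned = lines.filter(lambda line: line.strip() != "")

    cleaned.saveAsTextFile("hdfs:///data/cleaned/2015-06-01")  # placeholder destination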
But Spark is mostly a computation engine replacing MapReduce (plus a standalone cluster management option), not an ETL tool.
I know it's cool to bash MongoDB, but it is really nicely integrated with Spark, extremely quick for write workloads (3000 writes/second on our slow drives, and that's from inside the RDD map), and it doesn't flinch even when you get it to write 1 billion rows in quick succession. One thing that is really nice is that, being schemaless, you don't have to worry about setting up table structures beforehand.
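This isn't our exact job, just a sketch of the pattern: writing to MongoDB from inside an RDD operation with pymongo (I'm using foreachPartition here rather than map, but the idea is the same; the URI, database, and collection names are placeholders):

    from pymongo import MongoClient
    from pyspark import SparkContext

    def save_partition(rows):
        docs = [{"value": row} for row in rows]
        if not docs:
            return
        client = MongoClient("mongodb://localhost:27017")  # placeholder URI
        # schemaless: nothing to set up beforehand, documents go in as-is
        client["analytics"]["events"].insert_many(docs)
        client.close()

    sc = SparkContext(appName="MongoWriteSketch")
    rdd = sc.parallelize(range(1000))  # stand-in for whatever RDD you actually computed
    rdd.foreachPartition(save_partition)  # the writes happen on the executors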
You can look at Cassandra, which is historically known for exceptional write performance.
I somehow always keep getting confused between Spark and Storm! Can someone explain the difference between the two (use cases etc.) as if explaining to a five-year-old? Thanks!