Wednesday 5 June 2019

Hotel Link Solutions Sales Training: WHAT, HOW, WHY / WHY, HOW, WHAT

THE "WHY": "In everything we do, we strive to make our solutions simple to use." "We are your partners, we make things possible. If you don't earn, we don't earn as well."

Thursday 27 February 2014

The WhatsApp Architecture Facebook Bought For $19 Billion

Source: http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html

Rick Reed, in an upcoming March talk titled That's 'Billion' with a 'B': Scaling to the next level at WhatsApp, reveals some eye-popping WhatsApp stats:
What has hundreds of nodes, thousands of cores, hundreds of terabytes of RAM, and hopes to serve the billions of smartphones that will soon be a reality around the globe? The Erlang/FreeBSD-based server infrastructure at WhatsApp. We've faced many challenges in meeting the ever-growing demand for our messaging services, but as we continue to push the envelope on size (>8000 cores) and speed (>70M Erlang messages per second) of our serving system.
But since we don't have that talk yet, let's take a look at a talk Rick Reed gave two years ago on WhatsApp: Scaling to Millions of Simultaneous Connections.
Having built a high-performance messaging bus in C++ while at Yahoo, Rick Reed is not new to the world of high-scalability architectures. The founders are also ex-Yahoo guys with no little experience scaling systems. So WhatsApp comes by its scaling prowess honestly. And since they have a Big Hairy Audacious Goal of being on every smartphone in the world, which could be as many as 5 billion phones in a few years, they'll need to make the most of that experience.
Before we get to the facts, let's digress for a moment on this absolutely fascinating conundrum: How can WhatsApp possibly be worth $19 billion to Facebook?
As a programmer, if you ask me if WhatsApp is worth that much, I'll answer expletive no! It's just sending stuff over a network. Get real. But I'm also the guy that thought we didn't need blogging platforms, because how hard is it to remote login to your own server, edit the index.html file with vi, and write your post in HTML? It has taken quite a while for me to realize it's not the code, stupid; it's getting all those users to love and use your product that is the hard part. You can't buy love.
What is it that makes WhatsApp so valuable? The technology? Ignore all those people who say they could write WhatsApp in a week with PHP. That's simply not true. It is, as we'll see, pretty cool technology. But certainly Facebook has sufficient chops to build WhatsApp if they wished.
Let's look at features. We know WhatsApp is a no-gimmicks product (no ads, no gimmicks, no games) with loyal users from across the world. It offers free texting in a cruel world where SMS charges can be abusive. As a sheltered American, it has surprised me the most to see how many real people use WhatsApp to really stay in touch with family and friends. So when you get on WhatsApp, it's likely people you know are already on it, since everyone has a phone, which mitigates the empty-social-network problem. It is aggressively cross-platform, so everyone you know can use it, and it will just work. "It just works" is a phrase often used. It is full featured (shared locations, video, audio, pictures, push-to-talk, voice messages, read receipts, group chats, messages over WiFi, and all of it works regardless of whether the recipient is online or not). It handles the display of native languages well. And using your cell number as identity and your contacts list as a social graph is diabolically simple. There's no email verification, no username and password, and no credit card number required. So it just works.
All impressive, but that can't be worth $19 billion. Other products can compete on features.
Google wanted it is one possible reason. It's a threat. It's for the 99 cents a user. Facebook is just desperate. It's for your phone book. It's for the metadata (even though WhatsApp keeps none).
It's for the 450 million active users, with a user base growing at one million users a day and a potential for a billion users. Facebook needs WhatsApp for its next billion users. Certainly that must be part of it. And a cost of about $40 a user doesn't seem unreasonable, especially with the bulk paid out in stock. Facebook acquired Instagram for about $30 per user. A Twitter user is worth $110.
Benedict Evans makes a great case that mobile is a 1+ trillion dollar business and that WhatsApp is disrupting the SMS part of that industry, which has over $100 billion in global revenue: WhatsApp sends 18 billion messages a day, while the entire global SMS system sends only 20 billion a day. With the fundamental transition from PCs to nearly universal smartphone adoption, the addressable market is much larger than where Facebook normally plays.
But Facebook has promised no ads and no interference, so where's the win?
There's the interesting development of business use over mobile. WhatsApp is used to create group conversations for project teams and venture capitalists carry out deal flow conversations over WhatsApp.
Instagram is used in Kuwait to sell sheep.
WeChat, a WhatsApp competitor, launched a taxi-cab hailing service in January. In the first month 21 million cabs were hailed.
With the future of e-commerce looking like it will be funneled through mobile messaging apps, it must be an e-commerce play?
It's not just businesses using WhatsApp for applications that were once on the desktop or on the web. Police officers in Spain use WhatsApp to catch criminals. People in Italy use it to organize basketball games.
Commerce and other applications are jumping on to mobile for obvious reasons. Everyone has mobile and these messaging applications are powerful, free, and cheap to use. No longer do you need a desktop or a web application to get things done. A lot of functionality can be overlayed on a messaging app.
So messaging is a threat to Google and Facebook. The desktop is dead. The web is dying. Messaging + mobile is an entire ecosystem that sidesteps their channel.
Facebook needs to get into this market or become irrelevant?
With the move to mobile we are seeing deportalization of Facebook. The desktop web interface for Facebook is a portal style interface providing access to all the features made available by the backend. It's big, complicated, and creaky. Who really loves the Facebook UI?
When Facebook moved to mobile they tried the portal approach and it didn't work. So they are going with a strategy of smaller, more focused, purpose-built apps. Mobile first! There's only so much you can do on a small screen. On mobile it's easier to go find a special app than it is to find a menu buried deep within a complicated portal-style application.
But Facebook is going one step further. They are not only creating purpose built apps, they are providing multiple competing apps that provide similar functionality and these apps may not even share a backend infrastructure. We see this with Messenger and WhatsApp, Instagram and Facebook's photo app. Paper is an alternate interface to Facebook that provides very limited functionality, but it does what it does very well.
Conway's law may be operating here. The idea that "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations." With a monolithic backend infrastructure we get a Borg-like portal design. The move to mobile frees the organization from this way of thinking. If apps can be built that provide a view of just a slice of the Facebook infrastructure then apps can be built that don't use Facebook's infrastructure at all. And if they don't need Facebook's infrastructure then they are free not to be built by Facebook at all. So exactly what is Facebook then?
Facebook CEO Mark Zuckerberg has his own take, saying in a keynote presentation at the Mobile World Congress that Facebook's acquisition of WhatsApp was closely related to the Internet.org vision:
The idea is to develop a group of basic internet services that would be free of charge to use — "a 911 for the internet." These could be a social networking service like Facebook, a messaging service, maybe search and other things like weather. Providing a bundle of these free of charge to users will work like a gateway drug of sorts — users who may be able to afford data services and phones these days just don't see the point of why they would pay for those data services. This would give them some context for why they are important, and that will lead them to paying for more services like this — or so the hope goes.
This is the long play, which is a game that having a huge reservoir of valuable stock allows you to play.
Have we reached a conclusion? I don't think so. It's such a stunning dollar amount with such tenuous apparent immediate rewards that the long-term-play explanation actually does make some sense. We are still in the very early days of mobile. Nobody knows what the future will look like, so it pays not to try to force the future to look like your past. Facebook seems to be doing just that.
But enough of this. How do you support 450 million active users with only 32 engineers? Let's find out...

Sources

A warning here: we don't know a lot about the overall WhatsApp architecture, just bits and pieces gathered from various sources. Rick Reed's main talk is about the optimization process used to get to 2 million connections a server while using Erlang, which is interesting, but it's not a complete architecture talk.

Stats

These stats are generally for the current system, not the system we have a talk on. The talk on the current system will include more on hacks for data storage, messaging, meta-clustering, and more BEAM/OTP patches.
  • 450 million active users, and reached that number faster than any other company in history.
  • 32 engineers, one developer supports 14 million active users
  • 50 billion messages every day across seven platforms (inbound + outbound)
  • 1+ million people sign up every day
  • $0 invested in advertising
  • $8 million investment
  • Hundreds of nodes
  • >8000 cores
  • Hundreds of terabytes of RAM
  • >70M Erlang messages per second
  • In 2011 WhatsApp achieved 1 million established TCP sessions on a single machine with memory and CPU to spare. In 2012 that was pushed to over 2 million TCP connections. In 2013 WhatsApp tweeted: On Dec 31st we had a new record day: 7B msgs inbound, 11B msgs outbound = 18 billion total messages processed in one day! Happy 2013!!!
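
As a quick sanity check, the headline ratios in the stats above follow directly from the raw numbers (figures from the article; the averages ignore daily peaks and troughs):

```python
# Back-of-the-envelope arithmetic behind the stats above.
# Figures are from the article; averages ignore peak/trough variation.
users = 450_000_000            # active users
engineers = 32
msgs_per_day = 50_000_000_000  # inbound + outbound across seven platforms
seconds_per_day = 86_400

users_per_engineer = users // engineers
avg_msgs_per_second = msgs_per_day // seconds_per_day

print(users_per_engineer)    # 14_062_500 -> "one developer per 14 million users"
print(avg_msgs_per_second)   # 578_703 messages/sec, on average
```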

Platform

Backend

  • Erlang
  • FreeBSD
  • Yaws, lighttpd
  • Custom patches to BEAM (BEAM is like Java's JVM, but for Erlang)
  • Custom XMPP

Frontend

  • Seven client platforms: iPhone, Android, Blackberry, Nokia Symbian S60, Nokia S40, Windows Phone, ?
  • SQLite

Hardware

  • Standard user facing server:
    • Dual Westmere Hex-core (24 logical CPUs);
    • 100GB RAM, SSD;
    • Dual NIC (public user-facing network, private back-end/distribution);

Product

  • Focus is on messaging. Connecting people all over the world, regardless of where they are in the world, without having to pay a lot of money. Founder Jan Koum remembers how difficult it was in 1992 to connect to family all over the world.
  • Privacy. Shaped by Jan Koum's experiences growing up in the Ukraine, where nothing was private. Messages are not stored on servers; chat history is not stored; goal is to know as little about users as possible; your name and your gender are not known; chat history is only on your phone.

General

  • WhatsApp server is almost completely implemented in Erlang.
    • Server systems that do the backend message routing are done in Erlang.
    • Great achievement is that the number of active users is managed with a really small server footprint. Team consensus is that it is largely because of Erlang.
    • Interesting to note Facebook Chat was written in Erlang in 2009, but they went away from it because it was hard to find qualified programmers.
  • The WhatsApp server started from ejabberd
    • Ejabberd is a famous open source Jabber server written in Erlang.
    • Originally chosen because it's open source, had great reviews from developers, was easy to get started with, and because of Erlang's promise of long-term suitability for a large communication system.
    • The next few years were spent re-writing and modifying quite a few parts of ejabberd, including switching from XMPP to internally developed protocol, restructuring the code base and redesigning some core components, and making lots of important modifications to Erlang VM to optimize server performance.
  • To handle 50 billion messages a day the focus is on making a reliable system that works. Monetization is something to look at later, it's far far down the road.
  • A primary gauge of system health is message queue length. The message queue length of every process on a node is constantly monitored, and an alert is sent out if a backlog accumulates beyond a preset threshold. The process that falls behind gives a pointer to the next bottleneck to attack.
  • Multimedia messages are sent by uploading the image, audio or video to be sent to an HTTP server and then sending a link to the content along with its Base64 encoded thumbnail (if applicable).
  • Some code is usually pushed every day. Often, it's multiple times a day, though in general peak traffic times are avoided. Erlang helps being aggressive in getting fixes and features into production. Hot-loading means updates can be pushed without restarts or traffic shifting. Mistakes can usually be undone very quickly, again by hot-loading. Systems tend to be much more loosely-coupled which makes it very easy to roll changes out incrementally.
  • What protocol does the WhatsApp app use? An SSL socket to the WhatsApp server pools. All messages are queued on the server until the client reconnects to retrieve them. The successful retrieval of a message is reported back to the WhatsApp server, which forwards this status back to the original sender (who sees it as a "checkmark" icon next to the message). Messages are wiped from server memory as soon as the client has accepted them.
  • How does the registration process work internally in WhatsApp? WhatsApp used to create a username/password based on the phone's IMEI number. This was changed recently. WhatsApp now has the app request a unique 5-digit PIN, which WhatsApp sends by SMS to the indicated phone number (this means the WhatsApp client no longer needs to run on the same phone). Based on the PIN, the app then requests a unique key from WhatsApp. This key is used as the "password" for all future calls, and this "permanent" key is stored on the device. This also means that registering a new device will invalidate the key on the old device.
  • Google's push service is used on Android.
  • More users on Android. Android is more enjoyable to work with. Developers are able to prototype a feature and push it out to hundreds of millions of users overnight, if there's an issue it can be fixed quickly. iOS, not so much.
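
The queue-length health gauge described above can be sketched as follows. The threshold, process names, and sampling model here are illustrative assumptions, not WhatsApp's actual values; the real system inspects per-process message queues inside BEAM:

```python
# Sketch of monitoring per-process message queue lengths and alerting when a
# backlog exceeds a preset threshold. Threshold and process names are
# illustrative, not WhatsApp's actual values.

QUEUE_ALERT_THRESHOLD = 10_000  # assumed backlog limit per process

def find_backlogged(queue_lengths, threshold=QUEUE_ALERT_THRESHOLD):
    """Return (process, depth) pairs over the threshold, worst first.

    queue_lengths: mapping of process name -> current queue length.
    The deepest backlog points at the next bottleneck to attack.
    """
    offenders = [(name, depth) for name, depth in queue_lengths.items()
                 if depth > threshold]
    return sorted(offenders, key=lambda pair: pair[1], reverse=True)

snapshot = {"router": 600, "offline_store": 42_000, "presence": 120}
for name, depth in find_backlogged(snapshot):
    print(f"ALERT: {name} backlog={depth}")
```

In the real system the equivalent data comes from BEAM itself (and from the custom instrumentation patches described later), not from an external snapshot like this.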

The Quest for 2+ Million Connections Per Server

  • Experienced lots of user growth, which is a good problem to have, but it also means having to spend money buying more hardware and increased operational complexity of managing all those machines.
  • Need to plan for bumps in traffic. Examples are soccer games and earthquakes in Spain or Mexico. These happen near peak traffic loads, so there needs to be enough spare capacity to handle peaks + bumps. A recent soccer match generated a 35% spike in outbound message rate right at the daily peak.
  • Initial server loading was 200K simultaneous connections per server.
    • Extrapolated out would mean a lot of servers with the hoped for growth pattern.
    • Servers were brittle in the face of burst loads. Network glitches and other problems would occur. Needed to decouple components so things weren't so brittle at high capacity.
    • Goal was a million connections per server. An ambitious goal, given at the time they were running at 200K connections. Servers would also need enough headroom to absorb world events, hardware failures, and other glitches on top of the high usage levels.

Tools and Techniques Used to Increase Scalability

  • Wrote system activity reporter tool (wsar):
    • Records system stats across the system, including OS stats, hardware stats, and BEAM stats. It was built so it was easy to plug in metrics from other systems, like virtual memory. Tracks CPU utilization, overall utilization, user time, system time, interrupt time, context switches, system calls, traps, packets sent/received, total count of messages in queues across all processes, busy port events, traffic rate, bytes in/out, scheduling stats, garbage collection stats, words collected, etc.
    • Initially ran once a minute. As the systems were driven harder, one-second polling resolution was required because events that happened in the space of a minute were invisible. Really fine-grained stats to see how everything is performing.
  • Hardware performance counters in CPU (pmcstat):
    • See where the CPU is spending its time as a percentage. Can tell how much time is spent executing the emulator loop. In their case it is 16%, which means only 16% of time goes to executing emulated code, so even removing all the execution time of all the Erlang code would only save 16% of the total runtime. This implies you should focus on other areas to improve the efficiency of the system.
  • dtrace, kernel lock-counting, fprof
    • Dtrace was mostly for debugging, not performance.
    • Patched BEAM on FreeBSD to include CPU time stamp.
    • Wrote scripts to create an aggregated view across all processes to see where routines are spending all their time.
    • Biggest win was compiling the emulator with lock counting turned on.
  • Some Issues:
    • Earlier on, more time was spent in the garbage collection routines; that was brought down.
    • Saw some issues with the networking stack that were tuned away.
    • Most issues were lock contention in the emulator, which shows up strongly in the output of the lock counting.
  • Measurement:
    • Synthetic workloads, which means generating traffic from your own test scripts, are of little value for tuning user-facing systems at extreme scale.
      • Worked well for simple interfaces like a user table, generating inserts and reads as quickly as possible.
      • If supporting a million connections on a server, it would take 30 hosts to open enough IP ports to generate enough connections to test just one server. For two million connections it would take 60 hosts. It's just difficult to generate that kind of scale.
      • The type of traffic seen in production is difficult to generate. You can guess at a normal workload, but in reality you see networking events, world events, and, since WhatsApp is multi-platform, behaviour that varies between clients and by country.
    • Tee'd workload:
      • Take normal production traffic and pipe it off to a separate system.
      • Very useful for systems for which side effects could be constrained. Don't want to tee traffic and do things that would affect the permanent state of a user or result in multiple messages going to users.
      • Erlang supports hot loading, so could be under a full production load, have an idea, compile, load the change as the program is running and instantly see if that change is better or worse.
      • Added knobs to change production load dynamically and see how it would affect performance. Would be tailing the sar output looking at things like CPU usage, VM utilization, listen queue overflows, and turn knobs to see how the system reacted.
    • True production loads:
      • Ultimate test. Doing both input work and output work.
      • Put server in DNS a couple of times so it would get double or triple the normal traffic. Creates issues with TTLs because clients don't respect DNS TTLs and there's a delay, so can't quickly react to getting more traffic than can be dealt with.
      • IPFW. Forward traffic from one server to another so could give a host exactly the number of desired client connections. A bug caused a kernel panic so that didn't work very well.
  • Results:
    • Started at 200K simultaneous connections per server.
    • First bottleneck showed up at 425K. System ran into a lot of contention. Work stopped. Instrumented the scheduler to measure how much useful work is being done, or sleeping, or spinning. Under load it started to hit sleeping locks so 35-45% CPU was being used across the system but the schedulers are at 95% utilization.
    • The first round of fixes got to over a million connections.
      • VM usage is at 76%. CPU is at 73%. BEAM emulator running at 45% utilization, which matches closely to user percentage, which is good because the emulator runs as user.
      • Ordinarily CPU utilization isn't a good measure of how busy a system is because the scheduler uses CPU.
    • A month later tackling bottlenecks 2 million connections per server was achieved.
      • BEAM utilization at 80%, close to where FreeBSD might start paging. CPU is about the same, with double the connections. Scheduler is hitting contention, but running pretty well.
    • Seemed like a good place to stop so started profiling Erlang code.
      • Originally had two Erlang processes per connection. Cut that to one.
      • Did some things with timers.
    • Peaked at 2.8M connections per server
      • 571k pkts/sec, >200k dist msgs/sec
      • Made some memory optimizations so VM load was down to 70%.
    • Tried 3 million connections, but failed.
      • See long message queues when the system is in trouble. Either a single message queue or a sum of message queues.
      • Added instrumentation to BEAM for per-process message queue stats: how many messages are being sent and received, and how fast.
      • Sampling every 10 seconds, could see a process had 600K messages in its message queue with a dequeue rate of 40K with a delay of 15 seconds. Projected drain time was 41 seconds.
  • Findings:
    • Erlang + BEAM + their fixes - has awesome SMP scalability. Nearly linear scalability. Remarkable. On a 24-way box can run the system with 85% CPU utilization and it's keeping up running a production load. It can run like this all day.
      • Testament to Erlang's programming model.
      • The longer a server has been up, the more long-running, mostly idle connections it accumulates, so it can handle more connections because each connection is less busy.
    • Contention was biggest issue.
      • Some fixes were in their Erlang code to reduce BEAM's contention issues.
      • Some patches to BEAM.
      • Partitioning workload so work didn't have to cross processors a lot.
      • Time-of-day lock. Every time a message is delivered from a port it looks to update the time-of-day which is a single lock across all schedulers which means all CPUs are hitting one lock.
      • Optimized use of timer wheels. Removed the BIF timer.
      • The check-IO time table grew arithmetically, which caused VM thrashing as the hash table was reallocated at various points. Improved it to use geometric growth of the table.
      • Added a write-file operation that takes a port you already have open, to reduce port contention.
      • Mseg allocation was a single point of contention across all allocators. Made it per-scheduler.
      • Lots of port transactions when accepting a connection. Set option to reduce expensive port interactions.
      • When message queue backlogs became large, garbage collection would destabilize the system, so GC was paused until the queues shrank.
    • Avoiding some common things that come at a price.
      • Backported a TSE time counter from FreeBSD 9 to 8. It's a cheaper-to-read timer: fast to get the time of day, less expensive than going to a chip.
      • Backported the igb network driver from FreeBSD 9 because of issues with multi-queue NICs locking up.
      • Increase number of files and sockets.
      • Pmcstat showed a lot of time was spent looking up PCBs in the network stack. So bumped up the size of the hash table to make lookups faster.
    • BEAM Patches
      • Previously mentioned instrumentation patches. Instrument scheduler to get utilization information, statistics for message queues, number of sleeps, send rates, message counts, etc. Can be done in Erlang code with procinfo, but with a million connections it's very slow.
      • Stats collection is very efficient to gather so they can be run in production.
      • Stats kept at 3 different decay intervals: 1, 10, 100 second intervals. Allows seeing issues over time.
      • Make lock counting work for larger async thread counts.
      • Added debug options to debug lock counters.
    • Tuning
      • Set the scheduler wake up threshold to low because schedulers would go to sleep and would never wake up.
      • Prefer mseg allocators over malloc.
      • Have an allocator per instance per scheduler.
      • Configured carrier sizes to start out big and get bigger. This causes FreeBSD to use super pages, which reduced the TLB thrash rate and improved throughput for the same CPU.
      • Run BEAM at real-time priority so that other things, like cron jobs, don't interrupt the scheduler. Prevents glitches that would cause backlogs of important user traffic.
      • Patch to dial down spin counts so the scheduler wouldn't spin.
    • Mnesia
      • Prefer os:timestamp to erlang:now.
      • Using no transactions, but with remote replication ran into a backlog. Parallelized replication for each table to increase throughput.
    • There are actually lots more changes that were made.
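
The queue-drain projection quoted in the Results above (600K messages, 40K/sec dequeue rate, 15-second delay, 41-second projected drain) reduces to simple arithmetic. A minimal sketch, where the enqueue rate is an assumption back-solved to reproduce the quoted numbers rather than a measured figure:

```python
# Sketch of projecting queue delay and drain time from sampled queue stats,
# as used when diagnosing the failed 3-million-connection attempt.
# The enqueue rate below is an assumption chosen to reproduce the talk's
# quoted numbers, not a measured value.

def queue_delay(backlog, dequeue_rate):
    """Seconds a message entering the queue now waits before being dequeued."""
    return backlog / dequeue_rate

def projected_drain_time(backlog, enqueue_rate, dequeue_rate):
    """Seconds until the backlog empties, assuming steady rates."""
    net = dequeue_rate - enqueue_rate
    if net <= 0:
        return float("inf")  # queue is growing; it never drains
    return backlog / net

# 600K backlog draining at 40K msgs/sec -> 15 s delay, as in the talk.
print(queue_delay(600_000, 40_000))                           # 15.0
# With ~25.4K msgs/sec still arriving, drain takes about 41 s.
print(round(projected_drain_time(600_000, 25_400, 40_000)))   # 41
```

The point of the instrumentation was exactly this: sampling queue depth and rates every 10 seconds turns "the system is in trouble" into a number you can act on.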

Lessons

  • Optimization is dark grungy work suitable only for trolls and engineers. When Rick is going through all the changes that he made to get to 2 million connections a server it was mind numbing. Notice the immense amount of work that went into writing tools, running tests, backporting code, adding gobs of instrumentation to nearly every level of the stack, tuning the system, looking at traces, mucking with very low level details and just trying to understand everything. That's what it takes to remove the bottlenecks in order to increase performance and scalability to extreme levels.
  • Get the data you need. Write tools. Patch tools. Add knobs. Rick was relentless in extending the system to get the data they needed, constantly writing tools and scripts to collect what was needed to manage and optimize the system. Do whatever it takes.
  • Measure. Remove Bottlenecks. Test. Repeat. That's how you do it.
  • Erlang rocks! Erlang continues to prove its capability as a versatile, reliable, high-performance platform. Though personally all the tuning and patching that was required casts some doubt on this claim.
  • Crack the virality code and profit. Virality is an elusive quality, but as WhatsApp shows, if you do figure it out, man, it's worth a lot of money.
  • Value and employee count are now officially divorced. There are a lot of force multipliers out in the world today. An advanced global telecom infrastructure makes apps like WhatsApp possible. If WhatsApp had to build the network or the phones, it would never happen. Powerful cheap hardware and the availability of open source software are other multipliers. As is being in the right place at the right time with the right product in front of the right buyer.
  • There's something to this brutal focus on the user idea. WhatsApp is focused on being a simple messaging app, not a gaming network, an advertising network, or a disappearing-photos network. That worked for them. It guided their no-ads stance, their ability to keep the app simple while adding features, and the overall no-brainer "it just works" philosophy on any phone.
  • Limits in the cause of simplicity are OK. Your identity is tied to the phone number, so if you change your phone number your identity is gone. This is very un-computer like. But it does make the entire system much simpler in design.
  • Age ain't no thing. If it was age discrimination that prevented WhatsApp co-founder Brian Acton from getting a job at both Twitter and Facebook in 2009, then shame, shame, shame.
  • Start simply and then customize. When chat was launched initially the server side was based on ejabberd. It's since been completely rewritten, but that was the initial step in the Erlang direction. The experience with the scalability, reliability, and operability of Erlang in that initial use case led to broader and broader use.
  • Keep server count low. Constantly work to keep server counts as low as possible while leaving enough headroom for events that create short-term spikes in usage. Analyze and optimize until the point of diminishing returns is hit on those efforts and then deploy more hardware.
  • Purposely overprovision hardware. This ensures that users have uninterrupted service during their festivities and employees are able to enjoy the holidays without spending the whole time fixing overload issues.
  • Growth stalls when you charge money. Growth was super fast when WhatsApp was free, 10,000 downloads a day in the early days. Then when switching over to paid that declined to 1,000 a day. At the end of the year, after adding picture messaging, they settled on charging a one-time download fee, later modified to an annual payment.
  • Inspiration comes from the strangest places. Experience with forgetting the username and passwords on Skype accounts drove the passion for making the app "just work."


Tuesday 31 December 2013

Yhat | 10 Books for Data Enthusiasts

Source: http://blog.yhathq.com/posts/ten-data-books.html
August 11, 2013

Over the last few years, I've invested a lot of time exploring various areas of data analysis and software development. Going down the proverbial coding rabbit hole, I've quietly accumulated a lot of books on various subjects.
This is a post about 10 data books that I've gotten a lot of mileage out of and that really have legs.
  1. Programming Collective Intelligence by Toby Segaran

    Synopsis
    An overview of machine learning and the key algorithms in use today. Each chapter outlines a problem, defines an approach to solving it using a particular algorithm, and then gives you all the sample code you need to solve it.
    Why you should read it
    One of my favorite books (non-technical and technical). I try to re-read it at least once per year. Great explanations of how you can make machine learning useful.
    Everyone has something to learn from PCI. My only criticism: the code is indented with 2 spaces instead of 4. Nitpicky, but annoying. Despite being one of the oldest books on the list, it has managed to stay extremely relevant in the ever-changing landscape of data analysis tools.
  2. Machine Learning for Hackers by Drew Conway and John Myles White

    Synopsis
    A series of real world case studies and solutions which use machine learning. This is a very practical approach to machine learning. The visuals are great and there are plenty of code samples to go around. A few of the chapters focusing on text classification/regression are particularly well done.
    Why you should read it
    I was on the pre-order list for this one. It was a gruelling 3 months on the waiting list, but when it arrived Machine Learning for Hackers didn't disappoint. The code examples are optimized for readability rather than performance, which makes it much easier to follow along in the book (and translate them to other languages if need be). The code examples were also translated into Python, so I've included the Python logo even though it's not actually in the book.
  3. Super Crunchers by Ian Ayres

    Synopsis
    A collection of stories about data, modeling, and analysis, Super Crunchers tells how data and analysis are used in practice. Some of the examples are a little dated, but the core message stands the test of time.
    Why you should read it
    It's a lot higher level than most of the books on this list, and it's geared for people who might not actually be doing the analysis or the modeling. Still, Super Crunchers is a great read, and if you happen to be an analyst or data scientist, this will give you some insight into how the rest of the world views your work (for better or worse). The most important takeaway from the book is not necessarily what algorithms or technologies are being applied, but how they're being applied and how they're changing the way that companies use their data.
  4. Python for Data Analysis by Wes McKinney

    Synopsis
    A few years ago Wes McKinney took one for the team. He quit his job and wrote pandas, the open source Python package for wrangling data. Naturally Wes is the best person to write the book on pandas. The title may be a little misleading but Python for Data Analysis shows you the ins and outs of using pandas to improve your workflow.
    Why you should read it
    pandas is a must-have for doing analysis with Python. This book focuses more on munging, wrangling, and formatting data than on modeling (which many people incorrectly assume it covers). So if you need to brush up on your data wrangling (and you probably do), grab this off the shelf.
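As a taste of what the book covers, here is a small, hypothetical pandas sketch (made-up data and column names) of the load-clean-aggregate workflow:

```python
# A small illustration (with made-up data) of the kind of wrangling the book
# covers: cleaning a messy column, then aggregating with groupby.
import pandas as pd

df = pd.DataFrame({
    'city': ['NYC', 'NYC', 'SF', 'SF'],
    'rent': ['3,500', '2,900', '3,200', '4,100'],  # strings, as raw data often is
})

# Munge the rent column into numbers, then summarize by city.
df['rent'] = df['rent'].str.replace(',', '', regex=False).astype(int)
summary = df.groupby('city')['rent'].mean()
print(summary)
```

Nearly every analysis starts with a few steps like these, which is why the book's munging focus is so useful in practice.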
  5. R Cookbook by Paul Teetor

    Synopsis
    Pretty straightforward. A series of recipes for problems frequently encountered when doing analysis. Things like: building a regression model, merging data, imputing values, file I/O, etc.
    Why you should read it
    R can be a prickly language. The syntax is a little strange when you first start, everything is in tabular form, and weird stuff just tends to happen in general. This is the perfect book for when you have a question like:
    "I just want to loop through a bunch of files and combine them together. I know exactly how I'd do it in Python, but how the heck do I do it in R?"
    I strongly recommend this book if you're learning R, especially if you're coming from another programming language. It'll sit on your desk at work forever and you're guaranteed to pick it up at least a couple times per week.
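For comparison, here is roughly how that question from the quote looks in Python, using only the standard library (the file names and contents are made up for the demo):

```python
# The "loop through a bunch of files and combine them" task from the quote
# above, done in Python with only the standard library.
import csv
import glob
import os
import tempfile

# Set up a few sample CSV files to combine.
workdir = tempfile.mkdtemp()
for i, rows in enumerate([[('a', 1)], [('b', 2)], [('c', 3)]]):
    with open(os.path.join(workdir, f'part{i}.csv'), 'w', newline='') as f:
        csv.writer(f).writerows(rows)

# Loop over the files and concatenate their rows.
combined = []
for path in sorted(glob.glob(os.path.join(workdir, '*.csv'))):
    with open(path, newline='') as f:
        combined.extend(csv.reader(f))

print(combined)  # [['a', '1'], ['b', '2'], ['c', '3']]
```

The R Cookbook gives you the R equivalent of exactly this sort of everyday recipe.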
  6. The Signal and the Noise by Nate Silver

    Synopsis
    A great overview of how predictions impact different parts of our lives. The book follows a similar pattern to Super Crunchers, telling stories related to data and prediction, and then tying them all together at the end. A great, quick read for anyone interested in data or analysis.
    Why you should read it
    Just because it's on The Internet doesn't mean it's true. The same goes for data. If you stare at a chart long enough, a trend begins to emerge. The Signal and the Noise does a great job of teaching you when to throw up a warning flag when someone hands you some analysis.
  7. Visualize This by Nathan Yau

    Synopsis
    This is essentially the first couple years of Nathan Yau's blog, Flowing Data, in book format. There are great code examples to go along with some truly spectacular visuals.
    Why you should read it
    You can't show off your work without some nifty data visuals. This book takes you step by step and shows you how easy it is to construct great-looking charts, maps, and other visuals if you use the right tools.
  8. ggplot2: Elegant Graphics for Data Analysis by Hadley Wickham

    Synopsis
    The name pretty much sums it up. This book shows you how to use ggplot2 by walking you through some examples and gradually adding complexity.
    Why you should read it
    If you're going to use R, you're inevitably going to be using ggplot2. ggplot2 is one of the most popular R packages and probably the standard for making great-looking visualizations. Who better to teach you how to use ggplot2 than the package's creator, Hadley Wickham? The book provides some core examples for making basic plots, and then expands on each of these by detailing some of the more in-depth and advanced features of ggplot2, which makes it great for both beginners and advanced users.
  9. The NLTK Books by Jacob Perkins, Steven Bird, Ewan Klein, and Edward Loper

    Synopsis
    The Natural Language Toolkit (NLTK) is an excellent Python library for processing text and language. Its APIs can preprocess, classify, and help analyze your text. The Cookbook and the freely available online book serve as the instruction manuals for using NLTK.
    Why you should read it
    Text analytics is really fun. Some of the examples in the NLTK books are really just magical (the text classification chapter is particularly cool). Some of the code examples use a lot of Python syntactic sugar, which can make them a little difficult to read for someone new to Python, but the breadth of examples more than makes up for it. Top it all off with a really amazing library and it makes for a great read.
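To show the flavor of the text classification chapter without depending on NLTK itself, here is a toy bag-of-words classifier in plain Python (NLTK's real NaiveBayesClassifier is far more capable, and the training data here is made up):

```python
# A toy bag-of-words classifier, to illustrate the idea behind the text
# classification chapter. The training data is made up for the example.
from collections import Counter, defaultdict

train = [
    ('great fun loved it', 'pos'),
    ('wonderful and great', 'pos'),
    ('terrible boring mess', 'neg'),
    ('awful and boring', 'neg'),
]

# Count how often each word appears under each label.
word_counts = defaultdict(Counter)
for text, label in train:
    word_counts[label].update(text.split())

def classify(text):
    """Pick the label whose word counts best match the input (add-one smoothed)."""
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = 1.0
        for word in text.split():
            score *= (counts[word] + 1) / (total + 1)
        scores[label] = score
    return max(scores, key=scores.get)

print(classify('great and fun'))       # 'pos'
print(classify('what a boring mess'))  # 'neg'
```

NLTK wraps this basic idea in proper tokenizers, feature extractors, and trained corpora.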
  10. Think Stats by Allen B. Downey

    Synopsis
    This book provides a gentle overview to statistics and a nice tutorial on using Python as well. It's sort of a crash course in statistics for those of us who chose to major in something less mathy in school.
    Why you should read it
    It's short, sweet, and to the point. Think Stats serves as the introduction to statistics course that many people missed out on in school. If you need to brush up on CDFs, PDFs, Normal Variates, or the Central Limit Theorem, then this is the book you're looking for. Also not a bad way to learn Python while picking up some stats skills.
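For instance, the Central Limit Theorem can be demonstrated in a few lines of Python, the kind of simulation-first approach Think Stats favors (this snippet is my own sketch, not from the book):

```python
# A quick simulation of the Central Limit Theorem: means of samples from a
# (decidedly non-normal) uniform distribution cluster tightly around the true
# mean of 0.5, and their spread shrinks with sample size.
import random
import statistics

random.seed(42)  # for reproducibility

sample_means = [
    statistics.mean(random.random() for _ in range(100))
    for _ in range(1000)
]

# The distribution of the means is centered near 0.5 with a small spread.
print(round(statistics.mean(sample_means), 3))
print(round(statistics.stdev(sample_means), 3))
```

Plot a histogram of `sample_means` and the familiar bell curve appears, even though the underlying data is uniform.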

Other Books

A few others that didn't quite make the list but we still love:
Let us know if there are any others you think we missed!

Sent from Evernote

Top Posts of 2013: Big Data Beyond MapReduce: Google's Big Data Papers | Architects Zone

Source: http://architects.dzone.com/articles/big-data-beyond-mapreduce

Mainstream Big Data is all about MapReduce, but when looking at real-time data, limitations of that approach are starting to show. In this post, I'll review Google's most important Big Data publications and discuss where they are (as far as they've disclosed).

MapReduce, Google File System and Bigtable: the mother of all big data algorithms

Chronologically, the first paper is on the Google File System from 2003, which is a distributed file system. Basically, files are split into chunks which are stored in a redundant fashion on a cluster of commodity machines. (Every article about Google has to include the term "commodity machines"!)
Next up is the MapReduce paper from 2004. MapReduce has become synonymous with Big Data. Legend has it that Google used it to compute their search indices. I imagine it worked like this: they had all the crawled web pages sitting on their cluster, and every day or so they ran MapReduce to recompute everything.
Next up is the Bigtable paper from 2006, which has become the inspiration for countless NoSQL databases like Cassandra, HBase, and others. About half of the architecture of Cassandra is modeled after Bigtable, including the data model, SSTables, and write-ahead logs (the other half being Amazon's Dynamo database for the peer-to-peer clustering model).
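The canonical MapReduce example is counting words. Here is a single-machine sketch of the two phases (a real cluster distributes the map calls, shuffles the emitted pairs by key, and distributes the reduce calls):

```python
# A single-machine sketch of MapReduce's two phases, using the canonical word
# count example. A real cluster would distribute the map calls, shuffle the
# pairs by key across machines, and distribute the reduce calls.
from itertools import groupby
from operator import itemgetter

documents = ['the quick fox', 'the lazy dog', 'the fox']

def map_phase(doc):
    # Emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in doc.split()]

def reduce_phase(word, counts):
    # Sum the counts for one word.
    return (word, sum(counts))

# Map, then shuffle (group the pairs by key), then reduce.
pairs = sorted(p for doc in documents for p in map_phase(doc))
result = dict(
    reduce_phase(word, [c for _, c in group])
    for word, group in groupby(pairs, key=itemgetter(0))
)
print(result)  # {'dog': 1, 'fox': 2, 'lazy': 1, 'quick': 1, 'the': 3}
```

The strength of the model is that map and reduce are pure functions, so the framework is free to parallelize and retry them however it likes.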

Percolator: Handling individual updates

Google didn't stop with MapReduce. In fact, with the exponential growth of the Internet, it became impractical to recompute the whole search index from scratch. Instead, Google developed a more incremental system, which still allowed for distributed computing.
Now here is where it gets interesting, particularly compared to the common messaging from mainstream Big Data. For example, Google has reintroduced transactions, something NoSQL still tells you that you don't need, or cannot have, if you want scalability.
In the Percolator paper from 2010, they describe how Google keeps its web search index up to date. Percolator is built on existing technologies like Bigtable, but adds transactions and locks on rows and tables, as well as notifications of changes in the tables. These notifications are then used to trigger the different stages in a computation. This way, the individual updates can "percolate" through the database.
This approach is reminiscent of stream processing frameworks (SPFs) like Twitter's Storm or Yahoo's S4, but with an underlying database. SPFs usually use message passing and no shared data. This makes it easier to reason about what is happening, but also has the problem that there is no way to access the result of the computation unless you manually store it somewhere in the end.
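To make the percolation idea concrete, here is a toy sketch (the names and structure are mine, nothing like the real Percolator API) of observers firing on writes and triggering the next stage of a computation:

```python
# A toy sketch of Percolator-style "percolation": observers registered on a
# table are triggered by writes, and their own writes trigger the next stage.
# The names and structure here are illustrative, not the real Percolator API.

table = {}
observers = {}  # column -> callback fired when that column is written

def write(row, column, value):
    table[(row, column)] = value
    if column in observers:
        observers[column](row, value)

# Stage 2: when a raw document lands, write its word count.
observers['raw'] = lambda row, doc: write(row, 'wordcount', len(doc.split()))
# Stage 3: when a word count lands, update a running total.
observers['wordcount'] = lambda row, n: write(
    'totals', 'words', table.get(('totals', 'words'), 0) + n)

# A single update percolates through both stages, with no batch recompute.
write('doc1', 'raw', 'hello big data world')
write('doc2', 'raw', 'incremental updates')
print(table[('totals', 'words')])  # 6
```

The point is the control flow: each incremental write pushes just its own consequences through the pipeline, instead of recomputing the whole index.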

Pregel: Scalable graph computing

Eventually, Google also had to start mining graph data, like the social graph in an online social network, so they developed Pregel, published in 2010.
The underlying computational model is much more complex than in MapReduce: basically, you have worker threads for each node which are run in parallel iteratively. In each so-called superstep, the worker threads can read messages in the node's inbox, send messages to other nodes, set and read values associated with nodes or edges, or vote to halt. Computations run until all nodes have voted to halt. In addition, there are also Aggregators and Combiners which compute global statistics.
The paper shows how to implement a number of algorithms like Google's PageRank, shortest path, or bipartite matching. My personal feeling is that Pregel requires even more rethinking on the side of the implementor than MapReduce or SPFs.
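The superstep loop itself is simple enough to sketch. This toy, sequential version (my own illustration, not code from the paper) propagates the maximum value through a small graph, with nodes voting to halt once their value stops changing:

```python
# A toy, sequential sketch of Pregel's superstep model: each node reads its
# inbox, updates its value, messages its neighbors, and votes to halt when
# nothing changed. The graph and values are made up for illustration.
edges = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
value = {'a': 3, 'b': 6, 'c': 2, 'd': 1}

inbox = {n: [] for n in edges}
active = set(edges)  # nodes that have not yet voted to halt
supersteps = 0

while active:
    outbox = {n: [] for n in edges}
    for node in edges:
        new_value = max([value[node]] + inbox[node])
        if new_value != value[node] or supersteps == 0:
            value[node] = new_value
            for neighbor in edges[node]:   # send messages along out-edges
                outbox[neighbor].append(new_value)
            active.add(node)
        else:
            active.discard(node)           # vote to halt
    inbox = outbox
    supersteps += 1

print(value)  # every node converges to the global maximum, 6
```

PageRank, shortest paths, and the other algorithms in the paper follow this same read-inbox/compute/send/halt rhythm, just with different per-node update rules.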

Dremel: Online visualizations

Finally, in another paper from 2010, Google describes Dremel, which is an interactive database with an SQL-like language for structured data. So instead of tables with fixed fields like in an SQL database, each row is something like a JSON object (of course, Google uses its own protocol buffer format). Queries are pushed down to servers and then aggregated on their way back up, using some clever data format for maximum performance.

Big Data beyond MapReduce

Google didn't stop with MapReduce, but developed other approaches for applications where MapReduce wasn't a good fit, and I think this is an important message for the whole Big Data landscape. You cannot solve everything with MapReduce. You can make it faster by getting rid of the disks and moving all the data into memory, but there are tasks whose inherent structure makes it hard for MapReduce to scale.
Open source projects have picked up on the more recent ideas and papers by Google. For example, Apache Drill is reimplementing the Dremel framework, while projects like Apache Giraph and Stanford's GPS are inspired by Pregel.
There are still other approaches as well. I'm personally a big fan of stream mining (not to be confused with stream processing), which aims to process event streams with bounded computational resources by resorting to approximation algorithms. Noel Welsh has some interesting slides on the topic.


Wednesday 18 December 2013

Using RequireJS with Angular - Inline Block's Blog


Since attending Fluent Conf 2013 and watching the many AngularJS talks and seeing the power of its constructs, I wanted to get some experience with it.
Most of the patterns for structuring the code for single-page webapps use some sort of dependency management for all the JavaScript, instead of global controllers or other similar bad things. Many of the AngularJS examples seem to follow these bad-ish patterns. Using angular.module('name', []) helps this problem (why don't they show more angular.module() usage in their tutorials?), but you can still end up with a bunch of dependency loading issues (at least without hardcoding your load order in your header). I even spent time talking to a few engineers with plenty of experience with Angular, and they all seemed to be okay with just using something like Ruby's asset pipeline to include your files (into a global scope) and making sure everything ends up in one file via their build process. I don't really like that, but if you are fine with that, I'd suggest you do what you are most comfortable with.

Why RequireJS?

I love using RequireJS. You can async load your dependencies and basically remove all globals from your app. You can use r.js to compile all your JavaScript into a single file and minify that easily, so that your app loads quickly.
So how does this work with Angular? You'd think it would be easy when making single-page web apps. You need your 'module', aka your app. You add the routing to your app, but to have your routing you need the controllers, and to have your controllers you need the module they belong to. If you do not structure your code, and the order of what you load in with RequireJS, correctly, you end up with circular dependencies.

Example

So below is my directory structure. My module/app is called "mainApp".
My base public directory:
directory listing
index.html
- javascripts/
    - controllers/
    - directives/
    - factories/
    - modules/
    - routes/
    - templates/
    - vendors/
        require.js
        jquery.js
    main.js
    require.js
- stylesheets/
  ...
Here is my boot file, aka my main.js.
javascripts/main.js
require.config({
  baseUrl: '/javascripts',
  paths: {
    'jQuery': '//ajax.googleapis.com/ajax/libs/jquery/1.10.1/jquery.min',
    'angular': '//ajax.googleapis.com/ajax/libs/angularjs/1.0.7/angular',
    'angular-resource': '//ajax.googleapis.com/ajax/libs/angularjs/1.0.7/angular-resource'
  },
  shim: {
    'angular': {'exports': 'angular'},
    'angular-resource': {deps: ['angular']},
    'jQuery': {'exports': 'jQuery'}
  }
});

require(['jQuery', 'angular', 'routes/mainRoutes'], function ($, angular, mainRoutes) {
  $(function () { // using jQuery because it will run this even if DOM load already happened
    angular.bootstrap(document, ['mainApp']);
  });
});
You'll notice how I am not loading my mainApp in. Basically, we bring in the last thing that needs to be configured for the app to load, to prevent circular dependencies. Since the routes need the mainApp controllers, and the controllers need the mainApp module, we just have them directly include mainApp.js.
Also, we are configuring RequireJS to bring in angular and angular-resource (angular-resource so we can do model factories).
Here is my super simple mainApp.js
javascripts/modules/mainApp.js
define(['angular', 'angular-resource'], function (angular) {
  return angular.module('mainApp', ['ngResource']);
});
And here is my mainRoutes file:
javascripts/routes/mainRoutes.js
define(['modules/mainApp', 'controllers/listCtrl'], function (mainApp) {
  return mainApp.config(['$routeProvider', function ($routeProvider) {
    $routeProvider.when('/', {controller: 'listCtrl', templateUrl: '/templates/List.html'});
  }]);
});
You will notice I require the listCtrl but never actually use its reference. Including it adds it to my mainApp module so it can be used.
Here is my super simple controller:
javascripts/controllers/listCtrl.js
define(['modules/mainApp', 'factories/Item'], function (mainApp) {
  mainApp.controller('listCtrl', ['$scope', 'Item', function ($scope, Item) {
    $scope.items = Item.query();
  }]);
});
So you'll notice I have to include that mainApp again, so I can add the controller to it. I also have a dependency on Item, which in this case is a factory. The reason I include it is so that it gets added to the app and the dependency injection works. Again, I don't actually reference it; I just let dependency injection do its thing.
Let's take a quick look at this factory.
javascripts/factories/Item.js
define(['modules/mainApp'], function (mainApp) {
  mainApp.factory('Item', ['$resource', function ($resource) {
    return $resource('/item/:id', {id: '@id'});
  }]);
});
Pretty simple, but again, we have to pull in that mainApp module to add the factory to it.
So finally, let's look at our index.html. Most of it is simple stuff, but the key part is the ng-view portion, which tells Angular where to place the view. Even if you don't use document in your bootstrap and opt for a specific element instead, you still need this ng-view.
index.html
<!DOCTYPE html>
<html>
<head>
  <title>Angular and Require</title>
  <script src="/javascripts/require.js" data-main="javascripts/main"></script>
</head>
<body>
  <div class="page-content">
    <ng:view></ng:view>
  </div>
</body>
</html>
Posted by Inline Block Jun 6th, 2013 amd, angular, angularjs, coding, javascript, requirejs

