This document surveys the history and future of big data and distributed computing frameworks. It notes that the amount of data in the world has grown enormously, from roughly 130 GB in 1975 to 2.9 ZB in 2012. Hadoop is an open-source software framework that supports distributed applications over large datasets, built on a distributed file system (HDFS) and the MapReduce programming model. Major components of the ecosystem include HDFS, MapReduce, HBase, Hive, Pig, and ZooKeeper, along with Apache Incubator projects such as Flume. Future work includes improving high availability, scalability, and alternative processing models in core Hadoop, and further developing the ecosystem through projects such as Apache BigTop.
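The MapReduce programming model mentioned above can be illustrated with a minimal, framework-free sketch. This is plain Python rather than Hadoop's Java API, and the word-count task and function names are illustrative assumptions, not taken from the source:

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one input split.
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: combine all values for one key (here, sum the counts).
    return key, sum(values)

# Hypothetical input "splits" standing in for HDFS blocks.
documents = ["big data big ideas", "data flows in"]
pairs = [p for doc in documents for p in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # counts each word across all documents, e.g. 'big' -> 2
```

In a real Hadoop job the map and reduce functions run in parallel across the cluster, with HDFS supplying the input splits and the framework handling the shuffle; the sequential driver code here only mimics that flow.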