
10 sites to get large data sets or data corpora for free

2013-05-21 21:10
You may need gigabytes of data for performance or load testing: how does your app behave when there are loads of data, and what is its capacity? A frequently asked question from the sales team is, "The customer has 100 GB of data and wants to know whether our product will handle it. If so, how much RAM and disk storage are required?" This article collects pointers to large data corpora.

How do you generate that much data? The easiest way is to take some samples of data and multiply them with a script. Another option is to create data from random values. The main disadvantage of either approach is that the data has very little unique content and may not give the desired results. A minimal sketch of the multiplication approach follows; after that are links to real large data sets.
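For illustration, here is a minimal Python sketch of the sample-multiplication approach. The file names sample.csv and generated.csv and the 1 GB target are assumptions for the example, not something prescribed by the sources below.

import csv
import itertools
import random

# Replay a small set of sample rows, with a varied ID and a random field,
# until the output file reaches roughly the desired size.
TARGET_BYTES = 1_000_000_000  # assumed target: about 1 GB

with open("sample.csv", newline="") as f:
    samples = list(csv.reader(f))

with open("generated.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for i, row in enumerate(itertools.cycle(samples)):
        # Perturb the row so the output is not byte-for-byte duplicates.
        writer.writerow([f"id-{i}"] + row + [random.randint(0, 10**9)])
        if out.tell() >= TARGET_BYTES:
            break

The random column only makes rows distinct; it does not make the content realistic, which is exactly the weakness noted above.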

Wikipedia:Database - Wikipedia offers free copies of all available content to interested users. The data is available in multiple languages, and content can be downloaded along with images.
http://en.wikipedia.org/wiki/Wikipedia:Database_download
Common Crawl builds and maintains an open crawl of the web that is accessible to everyone. The data is stored in an Amazon S3 bucket, and the requester may have to spend some money to access it; see the download sketch after the link.
https://www.commoncrawl.org/
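As a sketch of pulling data from a requester-pays S3 bucket with the boto3 library: the bucket name and object key below are placeholders, and you should check the Common Crawl site for the actual bucket layout and current access terms.

import boto3

# Download one object from a requester-pays S3 bucket.
s3 = boto3.client("s3")
response = s3.get_object(
    Bucket="commoncrawl",  # placeholder; verify the real bucket name
    Key="crawl-data/segment/warc/file.warc.gz",  # placeholder key
    RequestPayer="requester",  # the downloader pays the transfer cost
)
with open("file.warc.gz", "wb") as f:
    f.write(response["Body"].read())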

EDRM File Formats Data Set consists of 381 files covering 200 file formats.
http://www.edrm.net/resources/data-sets/edrm-file-format-data-set
Apache Mahout is a top-level Apache project for creating scalable machine-learning algorithms. The Mahout wiki has many links to free and paid corpus data.
https://cwiki.apache.org/confluence/display/MAHOUT/Collections
EDRM Enron Email Data Set v2 consists of Enron e-mail messages and attachments in two sets of downloadable compressed files: XML and PST.
http://www.edrm.net/resources/data-sets/edrm-enron-email-data-set-v2
ClueWeb09 dataset was created to support research on information retrieval and related human language technologies. It consists of about 1 billion web pages in ten languages that were collected in January and February 2009. The dataset is used
by several tracks of the TREC conference.
http://lemurproject.org/clueweb09/
DMOZ - Open Directory Project is the largest, most comprehensive human-edited directory of the Web. It has collections of URLs in different categories. DMOZ is one of the main sources for internet search engines.
http://www.dmoz.org/rdf.html

theinfo.org - This is a site for large data sets and the people who love them: the scrapers and crawlers who collect them, the academics and geeks who process them, the designers and artists who visualize them. It's a place where they can exchange
tips and tricks, develop and share tools together, and begin to integrate their particular projects.
http://theinfo.org/

Project Gutenberg offers over 36,000 free ebooks to download to your PC, Kindle, Android, iOS or other portable device.
http://www.gutenberg.org/

The Million Song Dataset has data related to tracks and artists.
http://labrosa.ee.columbia.edu/millionsong/pages/additional-datasets
Mailing list archives: subscribe to any active mailing list and you will receive dozens of emails. Many groups and communities also provide an option to download the full mail archive; a sketch for reading one follows.
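Many downloadable archives are in mbox format. Here is a minimal sketch using Python's standard mailbox module; the file name archive.mbox is a placeholder.

import mailbox

# Dump the body of each message in a downloaded mbox archive to its own
# file, producing a pile of text documents usable as test data.
box = mailbox.mbox("archive.mbox")  # placeholder file name
for i, msg in enumerate(box):
    payload = msg.get_payload(decode=True)
    if payload is None:  # skip multipart containers in this sketch
        continue
    with open(f"mail_{i}.txt", "wb") as f:
        f.write(payload)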

Sometimes we may not find the type or format of data we want. In that situation, we can take one of these data sets and write a small script to convert it to the desired format, as sketched below.
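As an example of such a conversion, here is a minimal CSV-to-JSON-Lines script in Python; the file names, and the assumption that the CSV has a header row, are mine rather than from any of the sources above.

import csv
import json

# Convert a CSV file with a header row into JSON Lines, one object per row.
with open("input.csv", newline="") as src, open("output.jsonl", "w") as dst:
    for row in csv.DictReader(src):
        dst.write(json.dumps(row) + "\n")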

These large data sets help you load-test your app and understand its capacity and bottlenecks. They cannot, however, validate functional results: if you build a search engine, you cannot verify that a given keyword should return exactly that number of hits.

Nowadays every company is moving towards the cloud. People talk about big data, and the sources above give you ways to obtain or generate such data so that an application can be tested properly.