Obsolete:Media server/Distributed File Storage choices
Introduction
Please examine this evaluation of the existing distributed file storage choices. I only considered Open Source solutions. I'm eliminating certain candidates (columns in the matrix below):
- MooseFS
- Eliminated because the IRC community had only three people when I checked it in November 2010, and because its metadata server is only a hot backup, not a replicated / redundant server.
- Ceph
- Eliminated because it is not production code yet (although it looks good).
- XtreemFS
- Eliminated because its metadata server is a single point of failure with no obvious path to replication.
- Homebrew
- Eliminated because nobody but me likes it.
Please look at the remaining DFS choices, Mogile, OpenStack, and GlusterFS to see if you can veto (or praise) any of them. I'd like to have a binary choice when we decide to go. RussNelson 02:27, 24 November 2010 (UTC)
There is code in MediaWiki extensions for MogileFS --RussNelson 02:12, 15 March 2011 (UTC)
- Was an experiment by Domas IIRC Hashar 12:29, 7 December 2010 (UTC)
Performance Requirements
See Media server/Performance requirements.
Monitoring Requirements
- Ganglia might help : http://ganglia.wikimedia.org/
- Hrm. It only reports CPU and memory, not network load. RussNelson 21:51, 15 December 2010 (UTC)
- Do not forget to take into account the strategic plan up to 2015. Those statistics should skyrocket in the next couple of years. Hashar 12:29, 7 December 2010 (UTC)
- I didn't find anything detailed enough to affect these plans in any specific way RussNelson 21:51, 15 December 2010 (UTC)
Adaptation
Right now, the original and thumb servers are NFS servers. The thumbnailer reads files from the originals server and writes them back to the thumb server. The Apache servers have the original and thumb servers mounted via NFS. So if we are to use a distributed filesystem, its API needs either to appear in the filesystem "just like" NFS, or we need to modify all the code which touches the filesystem so that it goes through the DFS API instead. This section documents, for the top three candidates, what will be required to adapt our code to their interface.
Mogile
Mogile has a PHP interface, amusingly derived from the experimental MediaWiki code added by Domas. Probably better to bring Domas' code up to date with the current MediaWiki code. Mogile does not support FUSE, but Danga has experimental code for it.
OpenStack
OpenStack's REST interface apparently doesn't support deep levels of directories, and accesses need an authorization token. This bodes ill for the idea of directly exposing it to the caches (unless the caches can add a header) and letting them look up our existing files via REST. Nor does it support filesystem access via FUSE. It's not really a "file" system, but more like an object store. There is no searching of metadata (which is stored in the Linux filesystem's xattrs).
The design has a hard limit of 5GB per file. Containers get slow once they hold more than a million files, and we currently have 10M originals, so we will have to apportion files across containers.
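One way to apportion originals across containers is hash-based sharding. This is only an illustrative sketch (the container-name scheme and the count of 256 are my assumptions, not a decided design): hashing the filename spreads 10M originals evenly, keeping each container well under the million-file slowdown point.

```python
# Hypothetical sketch: assign each file to one of NUM_CONTAINERS containers
# by hashing its name, so no container grows past ~1M objects.
import hashlib

NUM_CONTAINERS = 256  # assumption: 10M originals / 256 ~= 40K files per container

def container_for(filename):
    """Pick a container name from the MD5 of the file name."""
    h = hashlib.md5(filename.encode('utf-8')).hexdigest()
    return 'originals-%02x' % (int(h[:2], 16) % NUM_CONTAINERS)
```

The assignment is deterministic, so any proxy can compute the container for a name without a lookup table.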
GlusterFS
There are two recommended interfaces to GlusterFS: NFS (which we're pretty sure we don't want to use, BUT it would be a drop-in solution), and their native client. It exposes files via a POSIX interface to the Linux FUSE client in user space. I don't have any experience with it, but I am dubious of the idea of running a file system in user space. First, it seems like a kludge; second, it seems like it would be slow, with multiple context switches into the kernel; and third, it seems like it would be unreliable. But those are fears, not necessarily rational ones.
- Using FUSE in the thumbnailers wouldn't need to slow things down. The actual network fetching would be orders of magnitude slower. Platonides 23:03, 14 December 2010 (UTC)
Installation
We decided to eliminate GlusterFS, at least for now, and try running Mogile and OpenStack Swift. I'm going to document my experiences installing both, so that we have an idea of how difficult each is.
Mogile
Following these instructions: http://code.google.com/p/mogilefs/wiki/InstallHowTo
Mogile consists of three packages: Server, Client, and Utils. They're written in Perl (and some C), but install straightforwardly. I used the 'cpan' program on Ubuntu 10.4 to install the dependencies. The Server requires the Client to be installed first. Running Makefile.PL checks for missing dependencies; it was sufficient to run "sudo cpan" and name the missing Perl module on the command line. The modules install into system libraries, which is why they require root. Here are the commands I used after fetching the packages:
for i in MogileFS-*; do tar xpfz $i; done
sudo apt-get install mysql-server
sudo mysql -p   # and then issue the commands in the Mogile docs to create the database
sudo cpan Danga::Socket
sudo cpan IO::AIO
sudo cpan Net::Netmask
sudo cpan Perlbal
cd MogileFS-Client-1.13/
sudo cpan IO::WrapTie
perl Makefile.PL
make
make test
sudo make install
cd ../MogileFS-Utils-2.18/
perl Makefile.PL
make
make test
sudo make install
cd ../MogileFS-Server-2.44/
perl Makefile.PL
make
make test
sudo make install
Note that "make test" fails unless you give it the database password per the documentation. Even then it still throws various errors. I'm going to track them down; errors in published tests make me worry.
OpenStack Swift
Following the instructions given here: http://swift.openstack.org/howto_installmultinode.html
The installation went fairly well. I ran into a few documentation problems, and spent some time rewriting / reordering things. It's better now. It should be fairly easy to write a Puppet script which will turn a machine into a storage node, and register with the object ring.
Documentation for Swift in our environment is now on its own page.
Testing
I have five laptops configured to serve as both a Mogile and a Swift cluster. They have 8GB SD cards for storage, and the Ubuntu and cluster software is stored on 4GB USB drives. They are borrowed machines (and I can borrow more), but I can't use their hard drives. They're probably reasonable for comparing Mogile against Swift, although of course this is not a valid test of the eventual cluster's performance.
Writing
I have a 5MB file produced using:
dd if=/dev/zero of=bigfile1.tgz count=10240
I uploaded that file to both clusters. The current machine is a.ra-tes.org. It is running the database for Mogile and the sole tracker. It is also running the auth server and the only proxy server for Swift.
root@a:~# time st -A https://a.ra-tes.org:11000/v1.0 -U system:root -K testpass upload myfiles bigfile1.tgz bigfile2.tgz
real 0m5.300s
user 0m1.260s
sys 0m0.368s
root@a:~# time mogtool inject --trackers=127.0.0.1:7001 --domain=testdomain bigfile1.tgz BigFile1
file BigFile2: 5f363e0e58a95f06cbe9bbc662c5dfb6, len = 5242880
Spawned child 14343 to deal with chunk number 1.
chunk 1 saved in 1.03 seconds.
Child 14343 successfully finished with chunk 1.
Replicating: 1
Replicating: 1
Replicating: 1
Replicating: 1
real 0m7.266s
user 0m0.692s
sys 0m0.188s
Mogile is generally slower than Swift at writing, by about half a second, except for one trial where it was faster by half a second. Repeating the same upload ten times in a row took the same time for both Mogile and Swift: 60.8 seconds. Replicating it took 57.3s for Mogile and 59.8s for Swift. Running two Mogile uploads simultaneously took 66s and 64.8s. Interestingly, running two Swift uploads simultaneously took 92.4 and 96.2 seconds. Using the Swift client's (st) native threading takes 30.7 seconds to upload ten files; running two of them at a time takes 62.5 and 62.9 seconds.
I made a set of small files like so:
for i in 1 2 3 4 5 6 7 8 9 10; do dd if=/dev/zero of=smallfile$i.png count=$i bs=5120; done
Swift uploaded ten files in ten seconds. Mogile took 39 seconds. Swift uploaded them in 1.5s using its native threading.
Ariel suggested that I test with files more representative of ours. So I fetched 73 thumbnails (including two originals) and stored them in my cluster. Two-thirds of the files were under 1K in size; the biggest was 81K. According to du they total 704K; according to "cat * | wc -c", 456K. Remember that this is a very wimpy cluster, so times should be compared between DFSes, not against your expectations. It took 2m45s to store them into Swift. Disk space increase was 76636-74820 on b, 75668-73712 on c, 68592-67644 on d (I didn't check the size on a). It took 4m41s to store them into Mogile. Disk space increase was 76876-76636 on b, 75988-75668 on c, 68956-68592 on d. One machine (e) was unavailable while this operation took place; it seems to have a dodgy Ethernet port. That seems like a fair test, though: if either one can't handle a down host, then we don't want it!
Reading
root@a:~# time st-null -A https://a.ra-tes.org:11000/v1.0 -U system:root -K testpass download myfiles bigfile2.tgz bigfile2.tgz
real 0m1.431s
user 0m0.892s
sys 0m0.112s
root@a:~# time mogtool extract --overwrite --trackers=127.0.0.1:7001 --domain=testdomain BigFile2 /dev/null
Fetching piece 1...
Trying http://10.0.0.75:7500/dev4/0/000/000/0000000007.fid... Done.
real 0m1.970s
user 0m1.192s
sys 0m0.236s
Similarly, Mogile is slower than Swift at reading, again by about half a second (though a larger percentage of the time). Repeating the same fetch ten times took 19.6 seconds with Mogile but 15.0 seconds with Swift; in both cases the file is written to /dev/null. Reading two sets of ten concurrently with Swift took 22.5 and 22.7 seconds; the same test with Mogile took 20 and 20.4 seconds. Using the Swift client's (st) native threading takes 7 seconds to download ten files; running two of them at a time takes 9.1 and 13.4 seconds.
Pulling down the small file set using Swift took 5.8s. Using Mogile took 7.3 seconds. Using Swift's native threading took 0.77 seconds.
Live Testing
We've decided to do live testing of Swift, just to make sure that we can make it work for us. To do this, we'll set up a cluster containing 1/256th of the current media store. Then we'll use Squid's ability to rewrite URLs: we'll turn the project and language into a container name, and use the rest of the filename as the object name. So, for example, we will look for
http://upload.wikimedia.org/wikipedia/en/2/25/Machinesmith.png
and turn it into
http://cluster.wikimedia.org/wikipedia%2Fen/2/25/Machinesmith.png
and look for
http://upload.wikimedia.org/wikipedia/en/thumb/1/18/Machinesmith.jpg/220px-Machinesmith.jpg
and turn it into
http://cluster.wikimedia.org/wikipedia%2Fen%2Fthumb/1/18/Machinesmith.jpg/220px-Machinesmith.jpg
Cluster.wikimedia.org will be a real cluster, but it will only have 1/256th of the current media store.
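The mapping above can be sketched as a few lines of Python. This is my own illustration, not the production rewrite code: the regex and function name are assumptions, but the transformation matches the examples given (slashes in the container name are percent-encoded as %2F, since Swift container names cannot contain "/").

```python
# Sketch of the URL-to-(container, object) rewrite described above.
import re

PATH_RE = re.compile(r'^/(?P<proj>[^/]+)/(?P<lang>[^/]+)(?P<thumb>/thumb)?/(?P<obj>.+)$')

def rewrite(path):
    """Map an upload.wikimedia.org path to a (container, object) pair."""
    m = PATH_RE.match(path)
    if m is None:
        return None
    container = m.group('proj') + '%2F' + m.group('lang')
    if m.group('thumb'):
        container += '%2Fthumb'
    return container, m.group('obj')

print(rewrite('/wikipedia/en/2/25/Machinesmith.png'))
# ('wikipedia%2Fen', '2/25/Machinesmith.png')
```

Note that the object name keeps the hashed "subdirectories" (2/25/...), so the original file can still be located.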
Live Test Configuration
Installed Swift according to the multinode instructions (which I improved and contributed back to OpenStack). After much angst with Squid (which hadn't been compiled with SSL enabled) and Swift's proxy (which turns out not to require SSL if you remove the certificate from the configuration), I discovered that you can load custom code into Swift's proxy. We're using this code for two purposes. First, to reformat the path into the account/container/object form that Swift requires. We could insert a header containing the authorization token, but it's just as easy to mark the container as public so anyone may access it. Second, to recognize a missing file, fetch it from upload.wikimedia.org, and write it into the cluster.
To let anybody read a container, you need to change its Read ACL (Access Control List). Use the 'st' command, like this:
st -A https://127.0.0.1:11000/v1.0 -U system:media -K password -r '.r:*' post project%2Flanguage
This can also be used to create a container which didn't previously exist.
Useful Docs
Anyway, this is useful: http://swift.openstack.org/admin_guide.html and this http://swift.openstack.org/development_saio.html and of course the root of their documentation: http://swift.openstack.org/ and WSGI (a python web service gateway interface): http://pythonweb.org/projects/webmodules/doc/0.5.3/html_multipage/lib/wsgi.html and the WSGI PEP: http://www.python.org/dev/peps/pep-0333/
http://pythonpaste.org/deploy/ explains the contents of these files:
/etc/swift/*-server.conf
/usr/share/pyshared/swift-1.1.0.egg-info/entry_points.txt
Once you understand that, you'll be able to use http://eventlet.net to understand this code:
/usr/bin/swift-*-server
/usr/share/pyshared/swift/common/wsgi.py
Rewrite
After much painful documentation reading and source browsing, I've finally figured out the relationship between the various bits and pieces of which Swift is composed. We need middleware to adjust our incoming URLs to the Swift schema, and we need middleware to act as a 404 handler so that we can create thumbnails as we do now. In the short term, we'll just fall back onto the Media Store to fetch thumbnails that don't exist. In the longer term, we'll have to locate the original image, fetch it, scale it, and write it back to the user / store it back into the cluster. It's possible that we can fold the scaler machines into the cluster and do scaling on the proxies (we'll need to have a lot of them anyway), or it's possible that we'll write a Swift Repo for the scaler. We'll certainly have to have Swift Repo code for the uploader, but that's a longer-term issue.
Let's take a tour of the middleware.
- First place to look is /etc/swift/proxy-server.conf . It's a standard Python Paste Deploy file with our 'rewrite' added to it. The Deploy system is designed to make it easy to create web services just by creating a config file that tells how everything is pasted together. The "egg" reference is to the distribution method for Python packages (Pythons are reptiles and lay eggs). Swift is the name of the egg, and the # portion tells what module out of the egg is being referenced. By default, a $THING_factory gets called, depending on whether it's an application or a filter (there are other types, but they're not in this config file).
- Look for our rewrite portions. They create a filter and put it into the proxy pipeline. Our Python code is on the Python path, and so we need only name it for Paste Deploy to be able to find it. It's stored in /usr/local/lib/python2.6/site-packages/wmf/rewrite.py on alsted.
- Rewrite imports webob. WebOb is part of the PythonPaste project: http://pythonpaste.org/webob/ . This is an object which keeps everything about a web server request and web server response in one place. It parses the headers and makes them easy to change.
- Rewrite uses a regexp to modify the incoming path to put it into the form that Swift needs (account/container/object). Note that the container cannot include internal slashes, and that the object may contain multiple slashes. We use our project names / languages / sections as the container name, and the name of the media file as the object name. We retain the hashed "subdirectories" so that we can locate the original file.
- Once we have the final Swift name, we continue with the pipeline. To prevent the pipeline from starting the HTTP response, we point that callable to a subroutine of our own which just saves the parameters.
- If we get a good response, we call the response with the appropriate header and body to return the file to them.
- If we get a bad response,
- we construct a URL pointing to the media store.
- We open up that URL to fetch the image out of the media store.
- We modify the request so that it becomes a 'PUT' to store the image into the cluster.
- As we read from the Media Store, we write to the cluster and return the media file to the user.
- If anything goes wrong, we delete the object off the cluster.
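The control flow of the 404-handler steps above can be sketched as a small WSGI middleware. This is a simplified illustration, not the real wmf/rewrite.py: the class and helper names are mine, and the fetch/PUT operations are passed in as callables (in production they would be an HTTP GET from upload.wikimedia.org and a PUT into the cluster).

```python
# Simplified WSGI sketch of the miss-handling flow (names are hypothetical).
class Missing404Handler(object):
    def __init__(self, app, fetch_from_store, put_to_cluster):
        self.app = app                 # the rest of the proxy pipeline
        self.fetch = fetch_from_store  # e.g. GET from upload.wikimedia.org
        self.put = put_to_cluster      # e.g. PUT back into the cluster

    def __call__(self, environ, start_response):
        # Point the pipeline's start_response at a saver of our own, so it
        # cannot begin the HTTP response before we inspect the status.
        saved = {}
        def save_response(status, headers, exc_info=None):
            saved['status'], saved['headers'] = status, headers
        body = self.app(environ, save_response)
        if saved['status'].startswith('404'):
            # Miss: fetch from the media store, write a copy into the
            # cluster, and return the file to the user.
            data = self.fetch(environ['PATH_INFO'])
            self.put(environ['PATH_INFO'], data)
            start_response('200 OK', [('Content-Length', str(len(data)))])
            return [data]
        # Hit: replay the saved response unchanged.
        start_response(saved['status'], saved['headers'])
        return body
```

The real code additionally streams the media-store response while writing it, and deletes the cluster object if anything fails partway; this sketch omits that for brevity.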
Modifying MediaWiki
MediaWiki has never virtualized its filesystem accesses, so you'll find filesystem calls sprinkled here and there throughout the code. Here are the files which I already know contain filesystem calls that modify the repository (unlink, rename, opendir/readdir/closedir, link, fopen):
- includes/filerepo/FSRepo
- .../LocalFile
- .../LocalRepo
- .../ForeignAPIFile
- includes/upload/UploadBase
- .../UploadStash
- includes/Math.php
Ariel points to /media, DjVuImage.php, MimeMagic.php, the upload stash code (/upload/*php etc.), and Math.php (as already discovered).
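The virtualization this section calls for amounts to routing every repository file operation through one backend interface, then implementing it once over the local filesystem and once over Swift, so the scattered unlink/rename/fopen calls collapse into backend methods. A language-agnostic sketch (in Python for brevity; MediaWiki's actual code is PHP, and these class names are my own illustration):

```python
# Hypothetical backend interface; FSRepo and a SwiftRepo would each
# implement it, and the rest of the code would never touch the FS directly.
class FileBackend:
    def store(self, name, data): raise NotImplementedError
    def read(self, name): raise NotImplementedError
    def delete(self, name): raise NotImplementedError
    def move(self, src, dst): raise NotImplementedError

class MemoryBackend(FileBackend):
    """In-memory stand-in showing the shape of an implementation."""
    def __init__(self):
        self.files = {}
    def store(self, name, data):
        self.files[name] = data
    def read(self, name):
        return self.files[name]
    def delete(self, name):
        del self.files[name]
    def move(self, src, dst):
        self.files[dst] = self.files.pop(src)
```

The upload/move/delete/undelete test sequence below then exercises exactly these four operations.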
Testing MediaWiki
- Upload a file. Visit http://ersch.wikimedia.org/index.php/Special:Upload and upload the image of your choice. Call it Test1.jpg
- Observe that all the thumbnail links work.
- Move the file. http://ersch.wikimedia.org/index.php/Special:MovePage/Test1.jpg . Move it to Test1a.jpg
- Delete the file. http://ersch.wikimedia.org/index.php?title=File:Test1a.jpg&action=delete
- Observe that you can't view the image anymore.
- Restore (undelete) the file by clicking on http://ersch.wikimedia.org/index.php/Special:Undelete/File:Test1a.jpg
- Observe that you can now view the file again.
- Upload a new version by clicking on http://ersch.wikimedia.org/index.php?title=Special:Upload&wpDestFile=Test1a.jpg&wpForReUpload=1
- Observe on the results page that both thumbnails are available. Click on the thumbs to view the full-size images.
- Revert an archived image so that it becomes the new image.
- Observe that this generates an error: "SwiftRepo::getZonePath: not implemented". As far as I know, this is the only missing part.
The Evaluation Matrix
Note http://leemoonsoo.blogspot.com/2009/04/simple-comparison-open-source.html for an older comparison. If a column contains the title of the column, it needs to be filled in with information.
Our Req't | Mogile | OpenStack's swift | GlusterFS |
---|---|---|---|
Endorsements | Last.fm, Digg, Veoh, Uh, Boobs&Kittens, LiveJournal, Six Apart, Friendster, Wikispaces, blip.tv manages 1.4 petabytes in MogileFS, Support for pear derived from Mediawiki support, Mediawiki??? | Rackspace Cloud Files, Cloudscaling (planned), SMEStorage (planned) | Listing of users, My list of testimonials |
Mailing list activity in the last month | 19 | 14, but IRC channel is active | 21 |
Documentation available | Sketchy, incomplete wiki. Most knowledge seems to be accumulated from IRC, mailing lists or reading source. | http://swift.openstack.org/ | http://www.gluster.com/community/documentation/index.php (using MediaWiki) |
Commercial support available? | apparently none | Although they claim commercial support exists, they don't make it easy to find! | Yes, from Gluster.com |
Supports Linux? | Yes | Yes | Yes |
Programming Languages | perl | python | C |
Single point of failure? | MySQL, use replication | No | No, uses hash to find files |
Runs on Commodity Hardware? | Yes | Yes | Yes |
NOT NFS! | Yes | Yes | Yes |
Direct HTTP, or are web front-ends needed? | Front-end | Direct | Direct |
Seekable? | Yes, if the underlying HTTP server supports Range: header | Yes, using the Range: header, per IRC. | Claims POSIX compatibility, so it MUST, right? |
Load balanced for the same data? | Returns URLs in load balanced order, with first one known good | Issues a GET on all copies and returns the first one that responds, per IRC. | Yes |
What kind of API? | Library | REST | Library |
Authentication? | No | It's a plugin. Comes with a sample "devauth", but soon will have a production "swauth". | Yes; IP or password |
Supports unpublished files for new upload? | Yes | Yes; authentication/authorization is included. | Yes |
Rack awareness? Off-site backups? | Yes, see MogileFS and Multiple Networks/Datacenters. Can configure policies for cross-colo replication and # of replicas per colo; ZoneLocal will help to preferentially serve from the local colo. IPs of each network must be in a range. | Yes | Not obviously |
Free of Proprietary Software? | Yes | Yes | Yes |
Maximum file size? | Limited only by toolchain (web server, underlying FS). How to stream uploads. | Unlimited read length; write 5GB chunks then paste them into an object, per IRC | 2^64 |
Maximum number of files? | Depends on MySQL config. 2^64 with --with-big-tables. | No limit, but write performance drops as you put more files into a container. We could do our own hashing. | 2^64 |
Replicated? | Yes, across hosts | Yes, via rsync | Yes |
Easy to add new boxes? | mogadm add host | Yes | Must restart |
Node failures detected and removed automatically? | Yes; an often-failing node is taken out of rotation for writes/reads but continues to be tested by the monitoring job, in case it comes back. Files that were on that node are not re-replicated until the node is manually marked dead. | Yes, but "Replication is an area of active development, and likely rife with potential improvements to speed and correctness." | Yes, and updated with changes when restored to operation. |
What happens when a failed node comes back up? | Monitoring job should notice it is back and will eventually route writes/reads there | Uses a "push" model, where data holders make sure that their partners have the right data. | Any changes while it was down are re-synched. |
How are node failures reported? | syslog | swift-stats-report will tell you. | They have a management console with logs |
Is Nagios already supported? | Yes | As of this writing, no, per IRC. | 3rd party |
How are replication policy failures reported? | syslog | swift-stats-report will tell you. | They have a management console with logs |
Can we set # of copies? | Yes, configurable by app-defined "class" of file. Could store multiple copies of originals, fewer for thumbnails. | Yes | Yes; just add more volumes |
How long before a new file is replicated? | < 1 second if queue is empty | PUTs are done in parallel and success is reported when two have succeeded, per IRC. | Can't find it in the documentation, and #glusterfs isn't really good at answering questions. |
How are files replicated? | Desired replications queued in DB table. Then worker processes pick them off, use HTTP GET to fetch, pipe to an HTTP PUT request. | Each partition owner compares a hash of its partition against other partitions, and pushes the files over using rsync. | Uses a changelog to determine which copies need updating. |
Do updates appear everywhere simultaneously? | No, async | No, but deletes and new uploads do. | Yes, unless there are conflicting updates. |
Want: Files stored individually on filesystem, with name like MediaWiki title? | No, uses automatically assigned IDs. | No | Yes, but you can't predict which server will hold a file. |
Want: free of esoteric kernel features? | Yes | Yes, but requires xattrs | Yes, but requires xattrs |
Want: Consistency Audit? | Yes | Yes | Maybe not needed?? |
Want: Extensible? | Yes, hook-based system similar to MediaWiki. (Perl) | Yes, using wsgi (web services standard, many langs) | Yes, via plugins (C) |
Discarded distributed FS
Our Req't | MooseFS | Ceph (not production code) | XtreemFS | Homebrew |
---|---|---|---|---|
Endorsements | MooseFS | Ceph | XtreemFS | Homebrew |
Mailing list activity in the last month | 64, but IRC channel is dead | 167 | 67 | 0 |
Supports Linux? | Yes | Yes | Yes | Yes |
Single point of failure? | Yes; the metadata server can have a hot backup. | No | DIR and MRC | No |
Runs on Commodity Hardware? | Yes | Yes | Yes | Yes |
NOT NFS! | Yes | Yes | Yes | Yes |
File access via HTTP? | No | Ceph | No | Yes |
Replicated? | Yes | Both file and metadata | No | Yes |
Easy to add new boxes? | Yes | Yes | Yes | Yes |
Can preserve FileRepo layer? | POSIX via FUSE | POSIX with kernel module, or direct access w/ library | POSIX via FUSE | Yes |
Supports unpublished files for new upload? | Yes | Yes | XtreemFS | Yes |
Rack awareness? Off-site backups? | Not obviously | Not obviously | Yes | Yes |
Free of Proprietary Software? | Yes | Yes | Yes - GPL | Yes |
Want: Files stored as files with names? | No | Ceph | No | Yes |
Want: free of esoteric kernel features? | Yes | Ceph | Yes | Yes |
Want: Consistency Audit? | Yes | Ceph | Not obviously | Yes |
GlusterFS
I solicited advice from some people on the list of GlusterFS users. Tobias Wilken says "We didn't test the 3.1 version yet. With 3.0.5 we are quite happy, except of some special inotify issues." Ken Bigelow of Pytec Design is using 33 servers with six 1TB drives each, interconnected with InfiniBand at 40Gbit/s. That's many fewer drives than most are using, but he's also getting very good performance out of his cluster, where others who try to put petabyte arrays on single servers are having trouble. Sabuj Pattanayek of Vanderbilt University is having a bit of trouble with FUSE (user-space filesystems), and his writes tend to be slow, even across a striped system, but he's otherwise happy with GlusterFS.
Image test methodology
On the call today, people were surprised that two-thirds of the media files were under 1K. Here's how I came by that conclusion. I selected the set of pages by clicking on "Random page" off the main page, then pulled the media files out of the HTML source. I expect that some of these files are commonly used by multiple pages (e.g. Commons_logo) and so will live in the caches full-time, so that image usage on a page will not get translated into cluster accesses.
wget http://upload.wikimedia.org/wikipedia/en/thumb/7/7e/Replace_this_image_male.svg/150px-Replace_this_image_male.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Flag_of_Sweden.svg/22px-Flag_of_Sweden.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/e/eb/Swimming_pictogram.svg/50px-Swimming_pictogram.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Flag_of_Sweden.svg/25px-Flag_of_Sweden.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/1/18/Pictgram_swimming.svg/50px-Pictgram_swimming.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Portrait_of_George_Washington.jpeg/27px-Portrait_of_George_Washington.jpeg
wget http://upload.wikimedia.org/wikipedia/commons/9/9c/Dramalis.jpg
wget http://upload.wikimedia.org/wikipedia/en/thumb/d/d2/Hells_Kitchen_title.png/245px-Hells_Kitchen_title.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/1/13/Padlock-olive-arrow.svg/20px-Padlock-olive-arrow.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/2/2b/Flag_of_the_Arab_League.svg/22px-Flag_of_the_Arab_League.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/b/b9/Flag_of_Australia.svg/22px-Flag_of_Australia.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/9/92/Flag_of_Belgium_%28civil%29.svg/22px-Flag_of_Belgium_%28civil%29.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/0/05/Flag_of_Brazil.svg/22px-Flag_of_Brazil.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Flag_of_Bulgaria.svg/22px-Flag_of_Bulgaria.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/c/cf/Flag_of_Canada.svg/22px-Flag_of_Canada.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/b/bc/Flag_of_Finland.svg/22px-Flag_of_Finland.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Flag_of_France.svg/22px-Flag_of_France.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/b/ba/Flag_of_Germany.svg/22px-Flag_of_Germany.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/c/c1/Flag_of_Hungary.svg/22px-Flag_of_Hungary.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/4/41/Flag_of_India.svg/22px-Flag_of_India.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/4/45/Flag_of_Ireland.svg/22px-Flag_of_Ireland.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/d/d4/Flag_of_Israel.svg/22px-Flag_of_Israel.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/0/03/Flag_of_Italy.svg/22px-Flag_of_Italy.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/9/9e/Flag_of_Japan.svg/22px-Flag_of_Japan.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/6/66/Flag_of_Malaysia.svg/22px-Flag_of_Malaysia.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/f/fc/Flag_of_Mexico.svg/22px-Flag_of_Mexico.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/2/20/Flag_of_the_Netherlands.svg/22px-Flag_of_the_Netherlands.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Flag_of_New_Zealand.svg/22px-Flag_of_New_Zealand.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/d/d9/Flag_of_Norway.svg/22px-Flag_of_Norway.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/1/12/Flag_of_Poland.svg/22px-Flag_of_Poland.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/5/5c/Flag_of_Portugal.svg/22px-Flag_of_Portugal.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/f/f0/Flag_of_Slovenia.svg/22px-Flag_of_Slovenia.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Flag_of_Sweden.svg/22px-Flag_of_Sweden.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/4/49/Flag_of_Ukraine.svg/22px-Flag_of_Ukraine.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/a/ae/Flag_of_the_United_Kingdom.svg/22px-Flag_of_the_United_Kingdom.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Flag_of_the_United_States.svg/22px-Flag_of_the_United_States.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/1/1c/Wiki_letter_w_cropped.svg/20px-Wiki_letter_w_cropped.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/d/d6/Wikiquote-logo-en.svg/40px-Wikiquote-logo-en.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/9/99/Question_book-new.svg/50px-Question_book-new.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/f/f2/Prince_Edward_Road_East_and_West.jpg/220px-Prince_Edward_Road_East_and_West.jpg
wget http://upload.wikimedia.org/wikipedia/en/thumb/2/26/Perw.jpg/220px-Perw.jpg
wget http://upload.wikimedia.org/wikipedia/commons/thumb/e/e0/P1070289.JPG/220px-P1070289.JPG
wget http://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/HK_Kln_Prince_Edward_Road_East_590.jpg/220px-HK_Kln_Prince_Edward_Road_East_590.jpg
wget http://upload.wikimedia.org/wikipedia/commons/thumb/4/4a/Commons-logo.svg/30px-Commons-logo.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/5/5f/Disambig_gray.svg/30px-Disambig_gray.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/2/28/Bandeira_da_Bahia.svg/23px-Bandeira_da_Bahia.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/f/fb/Brazil_State_Bahia.svg/130px-Brazil_State_Bahia.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Erioll_world.svg/18px-Erioll_world.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Lajosmizse.jpg/250px-Lajosmizse.jpg
wget http://upload.wikimedia.org/wikipedia/commons/thumb/c/c4/HUN_Lajosmizse_COA.jpg/100px-HUN_Lajosmizse_COA.jpg
wget http://upload.wikimedia.org/wikipedia/commons/thumb/a/ac/Hungary_location_map.svg/250px-Hungary_location_map.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/c/c1/Flag_of_Hungary.svg/22px-Flag_of_Hungary.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/HU_county_Bacs-Kiskun.svg/100px-HU_county_Bacs-Kiskun.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/9/9f/Philadelphia_colours.svg/16px-Philadelphia_colours.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/1/1e/New_Haven_colours.svg/16px-New_Haven_colours.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/7/76/New_York_colors.svg/16px-New_York_colors.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/8/88/Northern_Raiders_colors.svg/16px-Northern_Raiders_colors.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/8/8e/Aston_DSC-Glen_Mills_colours.svg/16px-Aston_DSC-Glen_Mills_colours.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/9/94/Connecticut_colors.svg/16px-Connecticut_colors.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/9/9e/Jacksonville_colors.svg/16px-Jacksonville_colors.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/c/c5/New_Jersey_colours.svg/16px-New_Jersey_colours.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/f/f5/Washington_DC_colours.svg/16px-Washington_DC_colours.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/f/fc/Fairfax_Eagles_colors.svg/16px-Fairfax_Eagles_colors.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/1/17/Rio_Osun.jpg/250px-Rio_Osun.jpg
wget http://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Nigeria_Osun_State_map.png/250px-Nigeria_Osun_State_map.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/7/79/Flag_of_Nigeria.svg/22px-Flag_of_Nigeria.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/2/25/Templo_Osun3.jpg/220px-Templo_Osun3.jpg
wget http://upload.wikimedia.org/wikipedia/commons/thumb/7/79/Flag_of_Nigeria.svg/23px-Flag_of_Nigeria.svg.png
wget http://upload.wikimedia.org/wikipedia/en/2/25/Machinesmith.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/1/18/Machinesmith.jpg/220px-Machinesmith.jpg
wget http://upload.wikimedia.org/wikipedia/commons/thumb/2/24/UK_Chequered_flag.svg/25px-UK_Chequered_flag.svg.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/6/62/Salic_Law.png/280px-Salic_Law.png
wget http://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Wikisource-logo.svg/38px-Wikisource-logo.svg.png
wget http://upload.wikimedia.org/wikipedia/en/thumb/a/ae/Devil_Face%2C_Angel_Heart_DVD.jpg/220px-Devil_Face%2C_Angel_Heart_DVD.jpg
wget http://upload.wikimedia.org/wikipedia/commons/thumb/9/93/Hong_Kong_film.svg/29px-Hong_Kong_film.svg.png
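The size figures quoted above (two-thirds under 1K, and the du-versus-cat totals) can be reproduced with a short script over the downloaded directory. A sketch, with the directory name and function name as my own assumptions:

```python
# Sketch: count files, count those under 1 KB, and sum bytes
# (the "cat * | wc -c" equivalent) for a directory of downloads.
import os

def size_stats(directory):
    """Return (file count, count under 1 KB, total bytes)."""
    paths = [os.path.join(directory, f) for f in os.listdir(directory)]
    sizes = [os.path.getsize(p) for p in paths if os.path.isfile(p)]
    return len(sizes), sum(1 for s in sizes if s < 1024), sum(sizes)
```

Note that du reports allocated blocks, so its total (704K) exceeds the summed byte count (456K).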
- http://en.wikipedia.org/wiki/Devil_Face,_Angel_Heart
- http://en.wikipedia.org/wiki/Sulik%C3%B3w
- http://en.wikipedia.org/wiki/Edward_Ellis
- http://en.wikipedia.org/wiki/Salic_law
- http://en.wikipedia.org/wiki/Arthur_Charles_Dobson
- http://en.wikipedia.org/wiki/Machinesmith
- http://en.wikipedia.org/wiki/Osun_State
- http://en.wikipedia.org/wiki/2007_AMNRL_season_results
- http://en.wikipedia.org/wiki/Lajosmizse
- http://en.wikipedia.org/wiki/Sebasti%C3%A3o_Laranjeiras
- http://en.wikipedia.org/wiki/RAMBO
- http://en.wikipedia.org/wiki/Lee_Ming-kwai
- http://en.wikipedia.org/wiki/Prince_Edward_Road
- http://en.wikipedia.org/wiki/Miles_Pe%C3%B1a
- http://en.wikipedia.org/wiki/Andy_Papoulias
- http://en.wikipedia.org/wiki/Yi_Gwal
- http://en.wikipedia.org/wiki/Hell%27s_Kitchen_%28U.S.%29
- http://en.wikipedia.org/wiki/Mahmud_Dramali_Pasha
- http://en.wikipedia.org/wiki/Charlene_Barshefsky
- http://en.wikipedia.org/wiki/K33AG
- http://en.wikipedia.org/wiki/Per_Wikstr%C3%B6m