Summary of the world's largest supercomputer.
Folding@Home is essentially the world's largest supercomputer, but it's no ordinary supercomputer. F@H is a distributed network of mostly normal computers that process so-called WUs, or work units. Depending on your computer, the WUs you are assigned are more or less complex. In the end we put these puzzle pieces together and get the big picture.
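To make the puzzle-piece idea a bit more concrete, here is a purely illustrative Python sketch of how work could be matched to machine capability. It is not the real assignment-server logic; the class names, project numbers, and thresholds are all made up for this example.

```python
# Purely illustrative: how a work server *could* match work units to machines
# of different capability. This is NOT Folding@Home's actual assignment logic.
from dataclasses import dataclass

@dataclass
class WorkUnit:
    project: int
    atom_count: int      # rough proxy for how heavy the simulation is

@dataclass
class Client:
    cpu_cores: int
    has_gpu: bool

def pick_work_unit(client: Client, queue: list[WorkUnit]) -> WorkUnit:
    """Hand big systems to strong machines, small systems to weak ones."""
    queue = sorted(queue, key=lambda wu: wu.atom_count)
    if client.has_gpu or client.cpu_cores >= 16:
        return queue[-1]          # largest pending system
    return queue[0]               # smallest pending system

# Example: a laptop gets the light WU, a GPU rig gets the heavy one.
pending = [WorkUnit(14253, 32_000), WorkUnit(18111, 1_200_000)]
print(pick_work_unit(Client(cpu_cores=4, has_gpu=False), pending).project)   # 14253
print(pick_work_unit(Client(cpu_cores=32, has_gpu=True), pending).project)   # 18111
```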
What do you do at Folding@Home?
There are multiple things that I'm currently working on, but that's also the case for most of us who contribute to the project. My main interests at the moment are translation, documentation, and security surrounding the F@H ecosystem.
Why do you call it the F@H ecosystem?
Good question, but to me F@H has become so big that it's really hard to pin it down. There are the F@H cores, which run the simulations, and the F@H client, which handles the WUs and other fancy stuff, but there are also the universities that either contribute to F@H or use its computing power to run their simulations.
The problems that we face
F@H is unlike any other supercomputer. Usually a supercomputer is controlled by a single entity, sits in a controlled physical location, and is already fairly reliable. F@H is really none of that. F@H is made up of tons of individual computers from all over the world, and we have no direct control over these machines. Validating results, securing you the user, and securing us as F@H is therefore quite a challenge.
Another problem is all the spam we get from people who assume that we pay them for their contributions to F@H.
Documentation madness
As I've already said, F@H is a giant project, and I've noticed that documentation is rather difficult under these circumstances. There are also many ways to document something, and many choices about how detailed you want that documentation to be. While documenting many things for myself (something I chose to do on my own, which obviously makes me the only contributor to my own documentation), I ran into several problems:
1. C++
2. C++ code that's about protein dynamics, which I have no experience in
3. Code comments, partially written by people who write C++ about protein dynamics
Forgive me for quickly ranting about C++ here. Anyway, let me go into my documentation efforts a bit more, since they might be rather unusual.
The most confusing part was that I was documenting GROMACS and OpenMM, the "cores" we use to actually run the simulations. Simply put, I wanted to understand how they work, how they are assembled, and so on.
While that effort was partly motivated by my own curiosity and by the hope of eventually finding bottlenecks, I also did it out of my love for infosec. I'm aware that documentation for both projects already exists; it just didn't satisfy my interest.
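For context on what such a "core" actually does, here is a minimal OpenMM sketch, closely following the kind of quickstart shown in OpenMM's own documentation. The input file name and parameter choices are placeholders, and this is not how a F@H core drives OpenMM internally.

```python
# Minimal OpenMM example, loosely following OpenMM's public quickstart.
# "protein.pdb" is a placeholder input file; the parameters are generic
# defaults, not whatever a production F@H core actually uses.
from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds
from openmm import LangevinMiddleIntegrator
from openmm.unit import nanometer, kelvin, picosecond, picoseconds

pdb = PDBFile("protein.pdb")
forcefield = ForceField("amber14-all.xml", "amber14/tip3pfb.xml")

# Build the physical system: particles, forces, constraints.
system = forcefield.createSystem(
    pdb.topology,
    nonbondedMethod=PME,
    nonbondedCutoff=1 * nanometer,
    constraints=HBonds,
)

# Integrate the equations of motion at 300 K with a 4 fs time step.
integrator = LangevinMiddleIntegrator(300 * kelvin, 1 / picosecond, 0.004 * picoseconds)
simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)

simulation.minimizeEnergy()   # relax the structure first
simulation.step(1000)         # then run 1000 MD steps
```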
Validating the results
Making sure that the results we produce are correct is one of our highest priorities. There are many ways in which we validate results and check their integrity. Unfortunately, some of the ways "we" do this are either unknown to me or not something I'm allowed to talk about. Covering them in detail would probably require an article of its own anyway.
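Since I can't describe F@H's actual checks, here is only a generic sketch of one technique commonly used in volunteer computing, quorum-based result replication as seen in BOINC-style projects. It is an illustration of the general problem, not a description of what F@H actually does.

```python
# Generic illustration of quorum-based result validation, a technique used in
# volunteer computing projects such as BOINC. This is NOT a description of
# Folding@Home's actual integrity checks.
import hashlib
from collections import Counter
from typing import Optional

def fingerprint(result: bytes) -> str:
    """Reduce a returned result to a short, comparable fingerprint."""
    return hashlib.sha256(result).hexdigest()

def validate(results: list[bytes], quorum: int = 2) -> Optional[bytes]:
    """Accept a result only if at least `quorum` independent clients agree."""
    counts = Counter(fingerprint(r) for r in results)
    best_hash, hits = counts.most_common(1)[0]
    if hits >= quorum:
        for r in results:
            if fingerprint(r) == best_hash:
                return r          # canonical, agreed-upon result
    return None                   # no agreement yet: reissue the work unit

# Three clients return identical bytes, one returns something bogus.
returned = [b"trajectory-frame-42"] * 3 + [b"tampered"]
print(validate(returned) is not None)   # True
```

In practice, floating-point simulation output is rarely bit-identical across different hardware, so real validators tend to compare results within tolerances rather than by exact hashes; the sketch only shows the quorum idea.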
Summary
This was probably the least infosec article I've ever published, and it will most likely stay that way for a long time. I just wanted to get something out for now and let people know that we're still working as hard as ever. There are still many things ahead of us, and we'll be there. Also, we're security-researcher friendly, so don't hesitate to write to us if you think you've found a vulnerability.
Last but not least, I want to mention that I'm not part of the core team, or even one of the core devs. I simply attend the meetings that we have, but there are also meetings I don't know about, or that involve information too sensitive for me.
This week's images were provided by Treedeo, from the "My Space" series of acrylic paintings.