I'm a programmer. I code in C++, C#, HTML5, and PHP. There are many graphics engines at my disposal. The question is: does there exist a graphics engine that is as true to our reality as possible, given our current understanding of physics? For instance, I can easily create macroscopic objects in a 3D space, but what about all of the elements in reality that make up these macroscopic objects? What if, for instance, I wanted to start from the bottom up, creating simulations at the Planck scale, of particles, atomic structures, cells, microbiology, etc.? What if I want to simulate quantum mechanics? Of course I can make a model of an atom, but it winds up not being exactly analogous to real life.
I would like to correctly simulate these structures and their behaviors. Assume also that I have access to an immense amount of parallel computing processing power (I do). Perhaps some non-gaming scientific graphics engines exist that I'm not aware of.
Answer
As a chemist turned engineer, I think I am well placed to answer this question.
Does there exist a graphics engine that is as true to our reality as possible given our current understanding of physics?
Given appropriate constraints and simplifications, it is possible to build a useful model from simple elements. Whether you consider this "true to reality" is open to interpretation.
There are three main simulation areas that spring to mind in mechanical engineering: finite element analysis, structural analysis, and computational fluid dynamics.
Finite element analysis means taking a solid body, breaking it down into tetrahedral or cubic elements, and applying the laws of physics (normally just the stress-strain relationship) to each one. To get a really good result on a relatively small object, you're going to need about 1000 elements in each of 3 dimensions, so that's a gigabyte of memory assuming one byte per element (in reality each element will need at least ten times that to store, as a minimum, its six translational and rotational degrees of freedom). This is already looking like a memory issue for a regular PC. By coarsening the mesh a little, we can run such a simulation on a PC, but it may take several hours for the effects of even a static load to propagate through the model to convergence. Modelling oscillation (time + 3 spatial dimensions) is pretty much impossible on a PC, both in terms of the time taken and the amount of data generated (several gigabytes per timestep). Reducing to time + 2 spatial dimensions helps a lot.
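As a rough check of those memory figures, here is a back-of-the-envelope sketch in C++; the per-element byte counts are illustrative assumptions, not the storage layout of any real FEA package:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // 1000 elements along each axis, as in the text.
    const std::uint64_t n = 1000;
    const std::uint64_t elements = n * n * n;                        // 1e9 elements

    // Optimistic figure from the text: one byte per element.
    std::cout << "1 byte/element : " << elements / 1e9 << " GB\n";   // 1 GB

    // A more realistic (assumed) layout: 6 degrees of freedom plus 6 stress
    // components per element, each stored as an 8-byte double.
    const std::uint64_t bytesPerElement = (6 + 6) * 8;               // 96 bytes
    std::cout << "realistic      : "
              << elements * bytesPerElement / 1e9 << " GB\n";        // ~96 GB
    return 0;
}
```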
In order to make the calculations reasonable, civil and structural engineers use a simplification to perform structural analysis. Programs like Staad Pro work with elements such as beams and columns, assuming they will bend according to known models. The engineer builds a Meccano-like model as the program input, specifying the nodes where the beams connect, indicating whether each joint is fixed or free to rotate, and so on. In this way, full four-dimensional (time + 3 space) analysis is possible.
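To make that Meccano-like model concrete, a hypothetical input structure might look like the sketch below; the type and field names are invented for illustration, not the actual input format of Staad Pro or any other package:

```cpp
#include <vector>

// Hypothetical structural-analysis input: nodes in 3D space connected by
// beam elements, each end either rigidly fixed or free to rotate (pinned).
struct Node {
    double x, y, z;                  // position in metres
};

enum class EndFixity { Fixed, Pinned };

struct Beam {
    int startNode, endNode;          // indices into the node list
    EndFixity startFixity, endFixity;
    double elasticModulus;           // Pa
    double crossSectionArea;         // m^2
    double secondMomentOfArea;       // m^4, governs bending stiffness
};

struct FrameModel {
    std::vector<Node> nodes;
    std::vector<Beam> beams;
};
```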
Computational fluid dynamics is the equivalent of finite element analysis, but for fluids rather than solids. Again we use a mesh of cubes or tetrahedra to represent the volume, but there are different issues. This is the type of simulation I have personal experience of, using Floworks software, which uses a cubic mesh and very usefully allows you to refine the mesh to a half or a quarter of the main mesh size in critical areas. Nevertheless, the experience has led me to believe that you can "predict" more or less anything you want with computational fluid dynamics software. I see it as a useful qualitative tool for identifying problem areas, rather than a means of quantitatively predicting pressure drop versus velocity.
Again we need about a billion elements for a really good simulation of a small object in 3 spatial dimensions, with at least pressure and three velocity components for each element. Again, predicting flow in three spatial dimensions plus time uses excessive computing power. Unfortunately, in the case of computational fluid dynamics, a system with stable inlet and outlet flows is quite likely to have an oscillation somewhere inside the model, which is not the case for finite element analysis under constant load. Sometimes we can simplify to 2 spatial dimensions. A 2-spatial-dimension + time analysis of a cross section of a chimney with wind blowing across it can be done, and it may reveal that the system oscillates due to vortex shedding, which, apart from the cyclic stress placed on the chimney, results in greater drag than would be seen with a time-averaged model. The equations used are the Navier-Stokes equations, and though very simple in concept, they can lead to surprisingly complex results if turbulence ensues (look up the Reynolds number for more info). It isn't really possible to extend the calculation into 4 dimensions on a PC, so approximations have to be made to account for the effect of turbulence.
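To put numbers on the chimney example, here is a rough C++ calculation of the Reynolds number for wind blowing across a cylinder, plus the vortex-shedding frequency from the usual Strouhal-number rule of thumb (St ≈ 0.2 for a circular cylinder); the wind speed and diameter are made-up example values:

```cpp
#include <iostream>

int main() {
    // Example values (assumptions, not from any real design):
    const double windSpeed    = 10.0;    // m/s
    const double diameter     = 2.0;     // m, chimney outer diameter
    const double airDensity   = 1.2;     // kg/m^3
    const double airViscosity = 1.8e-5;  // Pa.s

    // Reynolds number: ratio of inertial to viscous forces.
    const double Re = airDensity * windSpeed * diameter / airViscosity;

    // Vortex-shedding frequency from the Strouhal number, St ~ 0.2 for a
    // circular cylinder over a wide range of Re.
    const double St = 0.2;
    const double sheddingFrequency = St * windSpeed / diameter;     // Hz

    std::cout << "Re ~ " << Re << "\n";                             // ~1.3e6: turbulent
    std::cout << "Shedding ~ " << sheddingFrequency << " Hz\n";     // ~1 Hz
    return 0;
}
```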
In my field (combustion and heat transfer) the burner manufacturers introduce some simple combustion thermochemistry into their models. That adds another level of complexity which means a pretty powerful computer is needed.
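For a flavour of what "simple combustion thermochemistry" can mean, the sketch below estimates an adiabatic flame temperature for methane in stoichiometric air from a crude heat balance; the property values are rounded textbook figures I've assumed, and dissociation is ignored, so the answer comes out somewhat high:

```cpp
#include <iostream>

int main() {
    // Rounded textbook values (assumptions): methane lower heating value and
    // stoichiometric air requirement per kg of fuel.
    const double lowerHeatingValue = 50.0e3;  // kJ per kg CH4
    const double airPerKgFuel      = 17.2;    // kg air per kg CH4
    const double cpProducts        = 1.3;     // kJ/(kg.K), rough mean for flue gas

    const double massProducts = 1.0 + airPerKgFuel;                  // kg per kg fuel
    const double deltaT = lowerHeatingValue / (massProducts * cpProducts);
    const double flameTemperature = 298.0 + deltaT;                  // K, from 25 C

    // Ignoring dissociation overestimates the accepted figure of roughly 2200 K.
    std::cout << "Adiabatic flame temperature ~ " << flameTemperature << " K\n";
    return 0;
}
```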
So good luck: go ahead and perform a computational fluid dynamics simulation with a 1000x1000x1000 mesh for 1000 timesteps and you will generate tens of terabytes of data. Don't forget that each iteration will need to converge properly before you continue to the next timestep. Interpreting all that data is another issue. Do this every day for a year and you will have petabytes. Do you have that much storage? You will quickly see why engineers prefer to use the relationship between Reynolds number and friction factor rather than use the Navier-Stokes equations to calculate everything from first principles.
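The arithmetic behind those data volumes, and the friction-factor shortcut engineers use instead, look roughly like this; the per-cell variable count and the pipe-flow example values are assumptions for illustration:

```cpp
#include <cmath>
#include <iostream>

int main() {
    // Data volume: 1000^3 cells, ~5 variables per cell (p, u, v, w, T),
    // 8-byte doubles, 1000 timesteps.
    const double cells = 1000.0 * 1000.0 * 1000.0;
    const double bytesPerStep = cells * 5 * 8;
    const double terabytes = bytesPerStep * 1000 / 1e12;
    std::cout << "~" << terabytes << " TB for 1000 timesteps\n";         // ~40 TB

    // The engineer's shortcut: Darcy-Weisbach pressure drop with a friction
    // factor from the Blasius correlation (smooth pipe, turbulent flow),
    // instead of solving Navier-Stokes from first principles.
    const double density = 1000.0, viscosity = 1e-3;                     // water
    const double velocity = 2.0, diameter = 0.1, length = 50.0;          // example pipe
    const double Re = density * velocity * diameter / viscosity;         // 2e5
    const double frictionFactor = 0.316 * std::pow(Re, -0.25);           // Blasius (Darcy f)
    const double pressureDrop = frictionFactor * (length / diameter)
                              * 0.5 * density * velocity * velocity;     // Pa
    std::cout << "Re ~ " << Re << ", dP ~ " << pressureDrop / 1000 << " kPa\n"; // ~15 kPa
    return 0;
}
```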
What if for instance I wanted to start from the bottom up, creating simulations at the Planck scale, particles, atomic structures, cells, microbiology, etc.?
Whoa! That really is a lot of computing power. A molecule of haemoglobin weighs about 64000 daltons (about the same as 64000 hydrogen atoms). A dalton is 1.66E-24 g, the reciprocal of Avogadro's number. Haemoglobin is an interesting protein because it has four separate binding sites for oxygen, and binding of one oxygen causes a change in conformation that enhances the binding strength of the others; it's a kind of natural molecular machine (quite apart from the rather more complex molecular machines like ribosomes and cell membranes).
I don't have the exact molecular formula for haemoglobin handy, but let's make some assumptions. Let's assume the numbers of protons and neutrons are equal. That means there are 32000 protons and 32000 electrons (I'm not really interested in the protons, and we'll stay away from nuclear physics, but it's the best way to get an idea of the number of electrons). The average atomic mass will be somewhat similar to that of glucose: about 7.5 daltons. In round figures, let's say there are 10000 atoms. In addition, proteins keep their shape because they are surrounded by solvent, so let's say we need to multiply both of those numbers by 10: that's 320000 electrons and 100000 atoms.
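The back-of-the-envelope count above can be written out explicitly; all of the round figures here are the same assumptions carried over from the text:

```cpp
#include <iostream>

int main() {
    // Assumptions from the text: a 64000 Da molecule, protons ~= neutrons,
    // average atomic mass ~7.5 Da (like glucose), and a factor of 10 for the
    // surrounding solvent shell.
    const double molecularMassDa   = 64000.0;
    const double electrons         = molecularMassDa / 2.0;            // ~32000
    const double averageAtomicMass = 7.5;
    const double atoms = molecularMassDa / averageAtomicMass;          // ~8500, call it 10000

    const double solventFactor = 10.0;
    std::cout << "Electrons incl. solvent ~ " << electrons * solventFactor << "\n"; // ~320000
    std::cout << "Atoms incl. solvent     ~ " << atoms * solventFactor << "\n";     // ~100000
    return 0;
}
```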
Now you might hope to make some simplifications regarding the charge interactions and be able to predict the shape of the molecule, and maybe even the binding of O2 (though that is rather dependent on the iron atom where the O2 is bound, so you might prefer to rely on known data for that). This type of thing is indeed done, and it is one way of trying to find suitable drug molecules that bind to receptors. But when I was in the industry 15 years ago, it was far more fashionable to use automation to physically synthesize and screen vast numbers of potential drug substances.
Actually trying to make a quantum model of this from first principles would be vastly more complex, not least because quantum mechanics is statistical, so it would probably require a Monte Carlo computational approach, and you might have to consider hundreds of molecules. Indeed, in the comments, @lemon states that 1E4 molecules is about as far as he or she can go, and probably with substances a lot simpler than haemoglobin. I think it would be an achievement just to accurately model the binding site containing the iron with quantum mechanics.
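To give a feel for why quantum simulation goes statistical, here is a toy variational Monte Carlo sketch for a single hydrogen atom: Metropolis sampling of the trial wavefunction psi(r) = exp(-alpha*r) and averaging its local energy. This is about the simplest quantum system there is, and it already wants hundreds of thousands of random samples; it is nowhere near an iron-containing binding site, which is exactly the point. The parameter values are arbitrary choices for illustration.

```cpp
#include <cmath>
#include <iostream>
#include <random>

// Toy variational Monte Carlo for a hydrogen atom, in atomic units.
// Trial wavefunction psi(r) = exp(-alpha * r); the exact ground state
// (alpha = 1) has energy -0.5 hartree.
int main() {
    const double alpha   = 0.9;      // deliberately non-optimal trial parameter
    const int    steps   = 200000;
    const double stepLen = 0.5;

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform(-1.0, 1.0);
    std::uniform_real_distribution<double> accept01(0.0, 1.0);

    double x = 1.0, y = 0.0, z = 0.0;
    double energySum = 0.0;

    for (int i = 0; i < steps; ++i) {
        // Propose a random displacement of the electron.
        const double xn = x + stepLen * uniform(rng);
        const double yn = y + stepLen * uniform(rng);
        const double zn = z + stepLen * uniform(rng);

        const double r  = std::sqrt(x * x + y * y + z * z);
        const double rn = std::sqrt(xn * xn + yn * yn + zn * zn);

        // Metropolis acceptance on |psi|^2 = exp(-2 * alpha * r).
        if (accept01(rng) < std::exp(-2.0 * alpha * (rn - r))) {
            x = xn; y = yn; z = zn;
        }

        // Local energy for this trial function: -alpha^2/2 + (alpha - 1)/r.
        const double rc = std::sqrt(x * x + y * y + z * z);
        energySum += -0.5 * alpha * alpha + (alpha - 1.0) / rc;
    }

    // Expect roughly -0.495 hartree for alpha = 0.9 (exact ground state: -0.5).
    std::cout << "<E> ~ " << energySum / steps << " hartree\n";
    return 0;
}
```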
To put this in perspective, let's see how many atoms there are in a single bit of memory. According to Wikipedia, 128 GByte dies are now possible, which is about 1 terabit (this is DRAM, so these are capacitors, not transistors). Let's assume the die weighs 0.028 g and therefore contains a millimole of silicon. So we have 6.02E23 atoms/mol * 0.001 mol = 6.02E20 atoms to store 1E12 bits. That's roughly 6E8 atoms per bit. Once you start thinking about the number of bits you need to represent an electron, you realise you need megatonnes of silicon just to do the most basic simulations of a milligram of matter.
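Written out, with the die mass and capacity assumed above:

```cpp
#include <iostream>

int main() {
    // Assumptions from the text: a 128 GB DRAM die weighing 0.028 g,
    // i.e. one millimole of silicon (atomic mass 28 g/mol).
    const double avogadro       = 6.02e23;     // atoms per mole
    const double molesOfSilicon = 0.001;       // 0.028 g / 28 g/mol
    const double bits           = 128e9 * 8;   // 128 GB ~= 1e12 bits

    const double atoms = avogadro * molesOfSilicon;
    std::cout << "Atoms per bit ~ " << atoms / bits << "\n";   // ~6e8
    return 0;
}
```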