I got involved in distributed systems work back in the early '80s with Project Jade.
At Jade, we built an object-oriented simulation language, Sim++, with an embedded engine for transparent parallel execution: hundreds of CPUs could be applied to radically shorten the runs of complex models. Time Warp -- an optimistic time-management strategy -- was used to synchronize the CPUs. A network-neutralization layer supported transparent execution across shared-memory or distributed-memory supercomputers of any size and/or clusters of workstations. Very cool stuff!
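The heart of Time Warp is optimism: each logical process executes events as fast as it can, and if a "straggler" message arrives with a timestamp in its past, it rolls its state back and re-executes. The sketch below illustrates that rollback mechanic for a single toy process; the class and method names are mine for illustration, not the Sim++ API.

```python
import heapq

class LogicalProcess:
    """Toy logical process that executes events optimistically, Time Warp style.

    State snapshots are saved before each event so a straggler message
    (one timestamped earlier than local virtual time) can trigger a rollback.
    """

    def __init__(self):
        self.lvt = 0.0        # local virtual time
        self.state = 0        # toy state: a running sum of event values
        self.queue = []       # pending events as (timestamp, value)
        self.processed = []   # executed events, kept so they can be undone
        self.snapshots = {}   # timestamp -> state saved just before that event ran

    def schedule(self, ts, value):
        # A straggler (ts < lvt) forces a rollback before it can be enqueued.
        if ts < self.lvt:
            self.rollback(ts)
        heapq.heappush(self.queue, (ts, value))

    def rollback(self, ts):
        # Undo every executed event with timestamp >= ts and re-enqueue it.
        while self.processed and self.processed[-1][0] >= ts:
            ev = self.processed.pop()
            self.state = self.snapshots.pop(ev[0])  # restore pre-event state
            heapq.heappush(self.queue, ev)
        self.lvt = self.processed[-1][0] if self.processed else 0.0

    def run(self):
        # Optimistically execute everything currently queued, in timestamp order.
        while self.queue:
            ts, value = heapq.heappop(self.queue)
            self.snapshots[ts] = self.state
            self.state += value
            self.lvt = ts
            self.processed.append((ts, value))
```

A real Time Warp kernel adds much more (anti-messages to cancel sends made on a doomed branch, global virtual time to bound rollback and reclaim snapshots), but the save/rollback/re-execute loop above is the core trade: no blocking for synchronization, at the cost of occasionally redoing work.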
A lot of the simulations big enough to need parallel execution are used by the military for live virtual-world training systems, so Jade was eventually bought by SAIC. Their biggest problem was scaling the infrastructure to support 50,000 to 100,000 entities in a single, realtime virtual world. Most training simulations are more distributed than parallel in nature: data management becomes as big a problem as time management. I worked for several years at DARPA on scalability approaches for data distribution, while Darrin West (another Jade alum) continued on time management and scaling techniques for the simulation software itself (THEMA: more cool stuff!). We ended up as lead architects for the DARPA/SAIC Synthetic Theater of War project. My last DARPA stint was as the principal investigator for cluster computing in large-scale distributed training systems (ASTT). Rather than the normal fully distributed system, we left the users distributed but centralized the models (think of the Model/View/Controller pattern), with DSL-level traffic providing realtime connectivity.
The similarity of this work to MMP client/server games led me to dump the DoD and move to the games industry.
Larry Mellon Bibliography of Distributed Parallel Simulation