EldarOfSuburbia
tl;dr This is my own personal theory about why Ghostbusters consumes a ton of CPU, and a possible, simple solution. It's pure speculation and if it turns out to be wrong, so be it; this thread will go to the graveyard with "IT WAS WRONG" on the tombstone.
We've had Ghostbusters for a week, and while most who can play the table on their platform of choice will agree that it's a great table, a lot of other folks are having difficulties getting it running at all, and even when they do, it runs like a dog - and I'm not talking a racing greyhound here; more like a well-fed St Bernard that's just got up from a nap.
Why might this be? Well, one thing that has been noted is that the table itself consumes a lot of CPU resources.
What is causing this? A simple explanation is that the code that is being executed is running one (or more) tight loops that don't have any pauses built in. Apologies in advance if you're not one for coding, but consider these two simple programs:
Code:
/* Example 1: No sleep */
int main(void) {
    while (1) {
        /* do stuff */
    }
}
Code:
/* Example 2: Sleep */
#include <unistd.h> /* for usleep() */

int main(void) {
    while (1) {
        /* do stuff */
        usleep(1000); /* pause execution for 1 millisecond */
    }
}
I mean, that doesn't seem like much, does it? Introduce a millisecond pause! That's nothing.
Well, on my PC that millisecond pause is the difference between that program consuming < 1% of CPU (with the pause), and consuming 10-12% CPU (without it)!
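Don't take my word for it, either - here's a rough way to check on your own machine. This is just a sketch (it assumes a POSIX-ish system, with usleep() providing the millisecond pause) that runs the loop for five seconds of wall-clock time, then prints how much of that time was actually spent burning CPU:
Code:
/* CPU-use check - a rough sketch, assuming POSIX (Linux/macOS).
   Runs the loop for ~5 seconds of wall time, then prints the share
   of that time spent actually using the CPU. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    clock_t cpu_start  = clock();     /* process CPU time so far */
    time_t  wall_start = time(NULL);  /* wall-clock start */

    while (time(NULL) - wall_start < 5) {
        /* do stuff */
        usleep(1000); /* comment this out and watch the number jump */
    }

    double cpu_secs = (double)(clock() - cpu_start) / CLOCKS_PER_SEC;
    printf("CPU use: %.1f%% of one core\n", 100.0 * cpu_secs / 5.0);
    return 0;
}
(One busy loop pegs a single core at ~100%; if your CPU meter averages over, say, eight hardware threads, that shows up as roughly 12% - which would line up with my numbers above.)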
Examples like these are what might be considered "badly optimized" code. But code that is badly optimized on one platform might be perfectly fine on another.
Consider Stern's current pinball platform, SPIKE, which runs under the hood in Ghostbusters. It consists of a main "node" with the primary CPU that runs the core game code, and parcels out other responsibilities to more specialized satellite nodes. This main node is running a version of Linux on a relatively low-speed ARM processor. The only thing it is running, besides the operating system, is the game code. So there's no problem if the game code consumes 100% of the CPU - that's all it's ever going to do.
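To make that concrete, here's a purely hypothetical sketch of the shape of a main-node loop. Stern's actual SPIKE code is proprietary and I've never seen it - every name here (NodeMessage, read_node_message(), run_game_rules()) is invented. The point is just the shape: one loop, spinning flat out, routing messages between the satellite nodes and the game rules:
Code:
/* Hypothetical main-node loop - illustrative only, NOT Stern's code.
   The message struct and both functions are invented for this sketch. */
#include <stdio.h>

typedef struct { int source_node; int event; } NodeMessage;

/* Stub: on real hardware this would poll the bus linking the
   satellite nodes; here it just pretends nothing ever happens. */
static int read_node_message(NodeMessage *msg) { (void)msg; return 0; }

static void run_game_rules(const NodeMessage *msg) {
    printf("event %d from node %d\n", msg->event, msg->source_node);
}

int main(void) {
    NodeMessage msg;
    while (1) {                      /* spin as fast as the CPU allows */
        if (read_node_message(&msg)) /* switch hit? target dropped? */
            run_game_rules(&msg);
        /* no sleep: on SPIKE, this CPU has nothing better to do */
    }
}
Run that on a PC and it will happily eat a whole core; run it on a dedicated board and nobody cares.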
Compare that with your PC or your phone or tablet, which is running a whole bunch of stuff besides the Ghostbusters game code: the rest of TPA/SPA, its own OS (Windows, iOS, Android) and everything that entails, plus a bunch of background apps and services.
Another thing that a real Ghostbusters table isn't running is a simulation of the laws of physics. This is where, maybe, the Ghostbusters code could be "optimized" for TPA/SPA.
Real-world physics doesn't run on CPU cycles. It's analog, always running, no interrupts, no sleep cycles. The code in a real Ghostbusters table needs to be able to react to real-world physical events as soon as they happen, and those events aren't governed by software collision detection. The code needs to react as quickly as possible, and this means running any loops in as tight a cycle as the CPU will allow: no sleeping allowed. Ye cannae change the Laws of Physics, Jim, and the Laws of Physics don't sleep for a millisecond every time round the loop!
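As a toy illustration of that - the switch and coil functions below are made up, standing in for what would really be direct I/O register reads and writes - this is the kind of loop a real machine wants. Check the switch, fire the coil, repeat, zero slack:
Code:
/* Hypothetical flipper handler - the names are invented; on real
   hardware these would be I/O register accesses, not C stubs. */
#include <stdbool.h>

#define FLIPPER_BUTTON 12
#define FLIPPER_COIL    3

static bool read_switch(int sw)  { (void)sw; return false; } /* stub */
static void fire_coil(int coil)  { (void)coil; }             /* stub */

int main(void) {
    while (1) {                          /* as tight as the CPU allows */
        if (read_switch(FLIPPER_BUTTON)) /* button pressed? */
            fire_coil(FLIPPER_COIL);     /* respond right now */
        /* a sleep here is latency the ball won't wait for */
    }
}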
In the simulated world, that luxury exists. You could bake that millisecond (or shorter, depending on the platform/language) pause into the code every cycle; it would give the CPU time to do other stuff, and performance would improve dramatically.
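For what it's worth, the usual shape of that fix isn't even a flat one-millisecond nap - it's a fixed-timestep loop: do the tick's worth of physics and rules, then sleep away whatever's left of the tick. Again, a sketch only - I have no idea what TPA/SPA's engine actually does internally, and this assumes a POSIX/Linux-style clock_nanosleep():
Code:
/* Fixed-timestep loop - a sketch, not Farsight's actual code.
   Assumes POSIX clock_gettime()/clock_nanosleep(). One simulation
   tick per millisecond; the leftover time goes back to the OS. */
#include <time.h>

#define TICK_NS 1000000L /* 1 ms per tick */

static void simulate_one_tick(void) { /* physics + game rules here */ }

int main(void) {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    while (1) {
        simulate_one_tick();

        /* schedule the next tick 1 ms after the previous one */
        next.tv_nsec += TICK_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        /* sleep until then - this is where the CPU gets handed back */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
If a tick ever runs long, the absolute-time sleep just returns immediately and the loop catches up, so the pause never costs you simulation accuracy.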