Speed and Performance

[Image: SpeedVsPerformance.jpeg]

Speed and Performance: how they differ. Many people understand simple specs and performance changes in a subsystem. What they don't understand is how those changes affect the whole system. Improving a subsystem's speed may have a very small impact on real-world performance. This is why you hear someone claim to increase the speed by 50% or 100%, while in the real world users only see a 5% performance gain. The claims aren't fraudulent, just misleading, and people just don't get what the numbers mean.

Step 1

[Image: SpeedVsPerformance.1.gif]

Let's say I want to analyze the performance of one application. Let's look at running Photoshop: opening an image, doing a filter (or two), saving the file, and quitting. It takes a certain percentage of your time to do each high-level task. Here is how the time component of that might break down:

The total time is your real-world performance. If I made opening a file 100 times faster, it may make little (or no) significant difference to your workflow, because the majority of your time is spent filtering. This is why there is much more to performance than meets the eye.
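
To make that concrete, here is a minimal sketch in Python (the phase names and times are invented for illustration, not measured): it totals a hypothetical breakdown and shows what a 100x faster "open" actually buys you overall.

  # Hypothetical time breakdown for the workflow, in seconds --
  # these numbers are illustrative, not measurements.
  workflow = {"launch": 10.0, "open": 5.0, "filter": 30.0, "save": 4.0, "quit": 1.0}

  def total_with_speedup(times, phase, factor):
      """Total time after making one phase 'factor' times faster."""
      return sum(t / factor if name == phase else t for name, t in times.items())

  baseline = sum(workflow.values())                        # 50 seconds
  faster_open = total_with_speedup(workflow, "open", 100)
  print(f"baseline: {baseline:.0f}s, open 100x faster: {faster_open:.1f}s")
  # ~50s vs ~45s: a 100x faster "open" only buys about 10% overall,
  # because most of the time was never spent opening in the first place.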

Step 2

[Image: SpeedVsPerformance.2.gif]

Let's keep breaking things down into more and more detail. Users think of steps like "load, open, filter", but computer engineers often think in terms of the different subsystems. Let's jump into a few of the logical areas (like load and filter) and see how the engineering elements might break down in timing.

From an engineer's point of view, the function of "loading" or "filtering" can break down into the following groups: processing, drive (I/O), memory, and toolbox (MacOS calls). Of course, this breakdown detail is arbitrary. We can keep going until we are looking at which individual instructions are taking the most time in the processor, or which area or metric of the drive's performance is most important, and so on. But for explanation purposes this high-level view is good enough.

Notice that for loading the image, most of the time may be spent doing one thing (like drive access) but in a filter there is likely little drive access, and a lot more time is being spent accessing memory (where the image is stored) and of course processing it.

Different functions have dramatically different usage models in the computer, and they stress different subsystems. This is why improving one subsystem may make no noticeable performance difference if you aren't using it much, if it is not the bottleneck (the thing taking most of your time). And seemingly minor improvements can matter a lot if they happen to be what has been holding you up.
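
Continuing the sketch (with made-up fractions, not real measurements), each user-level function gets its own subsystem breakdown, and the bottleneck is simply whichever bucket holds the most time:

  # Hypothetical share of each function's time spent in each subsystem.
  # These fractions are invented for illustration only.
  breakdown = {
      "load":   {"processor": 0.10, "drive": 0.75, "memory": 0.10, "toolbox": 0.05},
      "filter": {"processor": 0.55, "drive": 0.05, "memory": 0.35, "toolbox": 0.05},
  }

  for function, parts in breakdown.items():
      bottleneck = max(parts, key=parts.get)
      print(f"{function}: bottleneck is {bottleneck} ({parts[bottleneck]:.0%})")
  # load is dominated by the drive; filter by the processor (and memory).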

Step 3

[Image: SpeedVsPerformance.3.gif]

Now let's pretend that someone just doubled the speed of the system bus (which should improve how long it takes to access memory). Here is how the simulated results might look.

When doing a load (running something from disk), a 100% improvement in bus speed may only make a 5% difference in the total time. Memory access really was twice as fast (nobody was being misleading about that), but memory was only one of many things going on (and not the costly one in this case).

For speeding up the load part of the function, what they really needed to do was speed up the drive access, where a doubling of speed would have been far more noticeable in real-world performance; probably more like a 40% gain instead of 5%.

Remember that each of these items can be further broken down. What is drive access? It is the sum of the time it takes the I/O port (say SCSI, IDE or FireWire) to transfer the data, as well as the time it takes the drive to find, access and send the data to the I/O port. Drives themselves have many metrics like access time, transfer rates, latency and so on. All these functions can keep getting broken down further and further, or in different ways depending on what you are trying to measure.
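
As a toy example of that further breakdown, a crude model of "drive access" might be seek time plus transfer time (the seek time and transfer rate below are placeholders, not the specs of any real drive or I/O port):

  # Crude model: drive access = seek/latency + time to move the data
  # through the I/O port. Placeholder figures, not real drive specs.
  def drive_access_seconds(file_mb, seek_ms=12.0, transfer_mb_per_s=8.0):
      seek = seek_ms / 1000.0                  # finding the data
      transfer = file_mb / transfer_mb_per_s   # moving it to the I/O port
      return seek + transfer

  print(f"20 MB image: {drive_access_seconds(20):.2f}s")   # about 2.5s
  # Large files are dominated by the transfer rate; thousands of tiny files
  # would be dominated by seek time -- a different metric matters.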


Doubling the memory bus may have had a nominal effect on loading Photoshop (say 5%), but it will have had a far bigger effect (say a 15-20% improvement) on doing a Photoshop filter. The Photoshop filter is far more likely to be memory bound (waiting on memory across the bus) than loading was. So if you are spending all your time doing filters, that memory bus improvement could be a far bigger gain for you.
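
Using the same made-up breakdown from the earlier sketch, these different payoffs fall straight out of the arithmetic: halve the time of one subsystem and see how much of the whole function you actually saved.

  # Illustrative fractions from before (not measurements).
  breakdown = {
      "load":   {"processor": 0.10, "drive": 0.75, "memory": 0.10, "toolbox": 0.05},
      "filter": {"processor": 0.55, "drive": 0.05, "memory": 0.35, "toolbox": 0.05},
  }

  def gain_from_doubling(parts, subsystem):
      """Fraction of the function's time saved if one subsystem gets 2x faster."""
      new_total = sum(f / 2 if name == subsystem else f for name, f in parts.items())
      return 1 - new_total

  print(f"2x memory, load:   {gain_from_doubling(breakdown['load'], 'memory'):.0%}")    # ~5%
  print(f"2x drive,  load:   {gain_from_doubling(breakdown['load'], 'drive'):.0%}")     # ~38%
  print(f"2x memory, filter: {gain_from_doubling(breakdown['filter'], 'memory'):.0%}")  # ~18%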

Step 4

[Image: SpeedVsPerformance.4.gif]

Now let's really go hog wild. Let us double the performance of the processor and imagine what the real-world results would be (looking at the highest-level user functions again).

If we double the processor performance, we know it will help some things far more than others. The Photoshop filter could see much better performance, since it is more likely to be processor bound, while loading, opening, and saving the files will see far smaller gains, since they are more disk bound. The total gain for doubling the processor performance may only be a 20-25% performance boost overall for these tests, but it really matters what you need to do. If you were spending hours just running filters all day, or doing other things that are very processor intensive, then this performance increase would be far more dramatic. If you were disk bound, then the processor isn't likely to make as big a difference for you.
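
Roughed out with the same kind of invented numbers (how much of the total each function takes, and how processor-bound each one is), a 2x processor nets a figure in that 20-25% ballpark:

  # (share of total time, processor-bound fraction) -- invented for illustration.
  functions = {
      "load/open/save": (0.55, 0.20),
      "filter":         (0.45, 0.70),
  }

  # Double the processor: only the processor-bound part of each function halves.
  new_total = sum(share * (cpu / 2 + (1 - cpu)) for share, cpu in functions.values())
  print(f"overall gain from a 2x processor: {1 - new_total:.0%}")   # ~21%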

Conclusion

Of course, the point of all this is that tuning a few of the right things can have a far larger effect than tuning many of the wrong things. The trick in computer performance is not to have better specs overall, but to have better numbers on the few key specs that matter for what you are doing.

Unless you are an engineer and work on this stuff, you may never know what the bottleneck is and where your computer spends most of its time. Most of the people babbling about specs or getting excited about them may have no idea what they mean.

I hope this helps you learn a bit more about speed and performance, and what they mean to you. If you are loading all the time (accessing the disk), then you should really be paying more for a fast hard drive. If you are spending all your time processing, then the faster processor will pay off. Your gains in each case will depend on what you do, and that varies for everyone.

Each of these areas can be broken down more and more; there is a nearly infinite number of metrics for measuring performance. Engineers keep doing this to see which areas are the bottlenecks and need the speed improvements. But just because a company radically improves something does not mean that it will affect you at all; though odds are that it will be a big gain for someone!

Remember, there are diminishing returns on speed improvements. If it takes you 50 seconds to do something, and 20 seconds of that is processing, and someone were to double your processor speed, you would likely see a nice 20% improvement. But even if they were to offer a processor that was a thousand times faster, you would only see your real-world performance improve by about 40%. So a thousand times faster is really only twice the improvement for you, or only another 20% better. That's nicer, but not revolutionary. What happened is the bottleneck shifted, and something else started taking most of the time; you have to improve all areas at once to see real returns.
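
Here is that arithmetic checked in a few lines, using the 50-second job with 20 seconds of processing from the paragraph above:

  # 50-second job: 20s of processing, 30s of everything else.
  other, processing = 30.0, 20.0

  def overall_gain(speedup):
      new_total = other + processing / speedup
      return 1 - new_total / (other + processing)

  print(f"2x processor:    {overall_gain(2):.0%}")      # 20%
  print(f"1000x processor: {overall_gain(1000):.0%}")   # ~40%, and 40% is the ceiling
  # However fast the processor gets, the other 30 seconds remain:
  # the bottleneck has simply moved somewhere else.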

This is perhaps the most important thing you can learn from this article. Improving any one area, without improving the others, just moves the slow spots around. Your system (computer) is not as fast as its fastest component; it is only as fast as its slowest one. The bottlenecks and the balance of the whole system are what count. Many wannabe geeks never learn this; they are so focused on speed improvements in areas that don't matter to what people are trying to do that they lose sight of the bigger picture: overall performance.

If you want to know what will matter to you, focus not on abstract subsystem benchmarks and specs, but on real-world results and application benchmarks. The fastest whatever may not matter. I could make a minor improvement to the slowest components and make a far bigger difference in the real world than I could by dramatically speeding up the fastest components. Computers are about balanced engineering. And this is why most people are so misled by PC specs; they are looking at individual specs and subsystems without understanding how those impact the whole system/solution. At least now you know the reason, and will be less easily impressed by misleading specs.

Written 1999.03.23