ArtReid said:
Am curious as to what most affects video speed/performance. Would it be
the memory interface, memory size, or DirectX level?
I'm thinking of upgrading my 1 GB 64-bit DirectX 10 card to a 1 GB 128-bit
DirectX 11 card.
Would there be any significant performance gain?
http://www.gpureview.com/videocards.php
Find the first card there, then use the "compare" function to find the
second card, so both are presented on the same web page, side by side.
Memory_bandwidth     = memory_bus_width x effective_memory_transfer_rate
Compute_performance  = number_of_functional_units x clock_speed
Movie_decoding_speed = proportional to the clock rate of the VPU
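To make those formulas concrete, here's a rough Python sketch. The
helper names and the card figures in it are made-up for illustration
(including the 2-ops-per-clock assumption), not specs pulled from
gpureview or any real product:

    # Back-of-the-envelope GPU estimates. All figures below are
    # hypothetical examples, not real card specs.

    def memory_bandwidth_gbs(bus_width_bits, effective_rate_gts):
        """GB/s = (bus width in bytes) x (effective transfer rate in GT/s)."""
        return (bus_width_bits / 8) * effective_rate_gts

    def compute_gflops(shader_units, shader_clock_ghz, ops_per_clock=2):
        """Rough shader throughput; assumes ops_per_clock operations
        per shader per cycle (2 would model a multiply-add)."""
        return shader_units * shader_clock_ghz * ops_per_clock

    # Same effective memory rate, 64-bit bus versus 128-bit bus:
    print(memory_bandwidth_gbs(64, 1.8))    # 14.4 GB/s
    print(memory_bandwidth_gbs(128, 1.8))   # 28.8 GB/s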
3D games use the multiple functional units (the programmable pixel and
vertex shaders). A game runs faster if more of them can work in
parallel for you.
The movie decoding block decodes only a few popular formats (leaving
other formats un-accelerated). As far as I know, it uses one private
functional unit. Higher-end cards tend to feed that logic block
a faster clock. Sometimes, when buying a graphics card for an HTPC,
you shop by clock rate, in the hope of getting the most speedup
on the movie decoder. A good movie decoder now comes very close to
decoding the entire movie by itself.
Using the gpureview comparison page is a good start at
comparing these resources. Only the movie decoding capability won't
be properly documented there.
It's possible for a 64-bit card to be faster than a 128-bit card,
if the 64-bit card has GDDR5 memory running at a high clock speed.
That's why the computed bandwidth value is a better indicator
when doing quick comparisons.
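As a worked example (hypothetical clocks, just to show the arithmetic),
feed both cases through the memory_bandwidth_gbs helper from the sketch
above:

    # 64-bit bus, GDDR5 at a hypothetical 5.0 GT/s effective rate:
    print(memory_bandwidth_gbs(64, 5.0))    # 40.0 GB/s
    # 128-bit bus, DDR3 at a hypothetical 1.8 GT/s effective rate:
    print(memory_bandwidth_gbs(128, 1.8))   # 28.8 GB/s

The narrower card comes out ahead here, which the bus width alone
would never tell you.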
Another word of warning - on low-end cards, the combination of
memory width and speed can cover a 4:1 range. In other words,
if Newegg had twenty different brands of those video cards for
sale, the weakest one could have only 25% of the memory bandwidth
of the fastest one. Which is why the info on the GPUreview
page is a good start, but you still have to review each of the
twenty products to see which ones are crappy. If they don't list
clock rates, you know they're cheating! The GPUreview page may
have a listing of cards for sale, and you can see the variations
possible in that listing.
In the end though, all of this "parameter" analysis is for the birds.
Only a benchmark of a 3D game provides a good comparison that takes
everything into account. Some games run better on ATI cards, others
on NVidia. The two brands of cards have a different "balance" between
functional units, so they're not identical. If a card has a lot of
functional units that can't be used, because another stage of
rendering is the bottleneck, then those functional units are
wasted. And the designers keep fiddling with the balance from one
generation to the next.
http://www.tomshardware.com/charts
For movie decoding, there is very little to go on. Only a few technical
articles of any value exist, and reviews of movie playback quality and
performance are few and far between. Generally, a pure software
decoder running on the CPU can do the best job (because it offers
an opportunity to patch any mistakes in a flexible way), but you
don't always have the luxury of a super-powerful CPU for
movie playback. Some HTPCs have weak CPUs, so a pure software
decoder isn't always an option. Then the decoding properties
of the block in the GPU become more important.
Paul