MSI Radeon 6850 Cyclone Power review

By Deep Dey

Although the Radeon HD 6900 series has just been released, for this editor one of the more exciting product launches this year has been AMD's Radeon HD 6800 series. The Radeon HD 6850 and 6870 have already received a warm welcome in the mid-range gaming segment of graphics cards.
Board partners know this, and as such they just love to pimp up products that people like. MSI submitted a Radeon HD 6850 for a test here at tech2in.com, so we are more than happy to bring you a full review of one of their newest products today: the Cyclone Power edition of the Radeon HD 6850.

The new mid-range 6800 series delivers a very decent chunk of gaming performance and a set of new features, and this card carries a friendly 169 EUR price tag.
The product is based on MSI's military class component selection, including the new Super Ferrite Chokes (SFC). The card looks robust and well built. It has six power phases for the GPU and another phase for its one gigabyte of memory.
The card comes factory clocked at a nice 860 MHz on the core and 4400 MHz on the memory (effective data rate). Now, perhaps you remember: early reports showed that this card was able to reach 1000 MHz on the core, based on a core voltage tweak of course. Sitting on top of the GPU is the well-known Cyclone cooler.

The MSI R6850 Cyclone Power Edition has four connectors: two DVI connectors, one HDMI connector and of course a DisplayPort connector.
Let's have a quick peek at the new card and then head onwards into the review, where we'll cover the features, architecture, photo shoot, power consumption, heat levels and benchmarks... oh yeah, you are in for the entire tech2in treat today. Next page you guys.

MSI R6850 Cyclone Power Edition

Let's meet Bart ...

Both cards released in the Radeon HD 6800 range are based on what is now known as the 'Barts' codenamed GPU. The cards derived from it are in fact segmented as mid-range, not high-end as many people expected. So yeah, the best is still to come. The reason? AMD needed to clean up its numbering, as next year a lot of new products in the Fusion line will eat away at the lower segment of the numbering scheme.
The Radeon HD 6800 series positions itself in the mid-range market, whereas the Radeon HD 5800 series will be succeeded by the Radeon HD 6900 (Cayman) series, topped off by a dual-GPU part codenamed Antilles. I can't say it often enough to make this clear, I guess.
So that is certainly something to think about, as the naming scheme has changed. Also, why did AMD start with the Radeon HD 6850 and 6870 and not the 6900, you might ask? Well, obviously it's easier for AMD to release a slightly lower-spec GPU first, but more importantly, currently the only product that really bothers AMD is NVIDIA's lovely GeForce GTX 460 (GF104). This is a very successful product for which AMD did not have a good answer; as such there was a gap in their product line-up, and that gap needed to be filled first.
Within its own line-up the Radeon HD 6850 will be faster than the Radeon HD 5830, and the Radeon HD 6870 will be faster than the Radeon HD 5850, but not faster than the 5870. The Radeon HD 6870 will be a pure reference product, while the Radeon HD 6850 comes in many custom designs and boards. In the end it will all be about pricing, of course.
Let's have a quick comparative overview of some of the specifications representing this scope of mid-range reference products.


Specifications     | Radeon HD 5770 | Radeon HD 5850 | Radeon HD 6850   | Radeon HD 6870
GPU                | Juniper XT     | Cypress Pro    | Barts Pro        | Barts XT
Manufact. tech.    | 40 nm          | 40 nm          | 40 nm            | 40 nm
GPU frequency      | 850 MHz        | 725 MHz        | 775 MHz          | 900 MHz
Stream processors  | 800            | 1440           | 960              | 1120
Memory frequency   | 4800 MHz       | 4000 MHz       | 4000 MHz         | 4200 MHz
Memory bus         | 128-bit        | 256-bit        | 256-bit          | 256-bit
Memory buffer      | 1 GB GDDR5     | 1 GB GDDR5     | 1 GB GDDR5       | 1 GB GDDR5
Power consumption  | 108 watt       | 151 watt       | ~127 watt        | >150 watt
Performance        | -              | -              | Faster than 5830 | Faster than 5850


So each of the two R6800 SKUs has a separate codename, the R6850 being Barts Pro and the 6870 being Barts XT. We'll leave the entire codename thing for what it is now and move a little onward into the architecture.
Both cards are of course updated DX11-class products with a couple of new features. Armed with 1.7 billion transistors, the Radeon HD 6850 is pitted against the competition's GeForce GTX 460 768MB model. The 6850 is clocked at 775 MHz on its core and shader processor domain and comes with a full GB of graphics memory. This memory sits on a 256-bit bus and runs at an effective data rate of 4000 MHz.
The product has 12 SIMD clusters, 48 texture units, 960 shader processors, 32 ROPs and a TDP of 127W (19W in idle). Aside from the wider memory bus you can recognize clear similarities to the R5770 here. This product, however, will bring you 1.5 TFLOPS of compute performance, and combined with that 256-bit GDDR5 memory it will leave the R5770 far behind.
Pitted against the GeForce GTX 460 1GB version is the Radeon HD 6870, and this is going to be the more interesting product for most of tech2in's readers, we suspect. As you may have noticed, the overall shader count and clock frequencies are higher than earlier leaked rumors suggested, and that applies to the R6870 as well.
The 6870 is clocked a good chunk faster at a nice 900 MHz and also comes with a full GB of graphics memory. This memory is 256-bit and runs at an effective data rate of 4200 MHz, good for a peak bandwidth of 134 GB/sec. The product has 14 SIMD clusters, 56 texture units, 1120 shader processors, 32 ROPs and a TDP slightly above 150W with a 19W idle TDP. This product will bring you 2.0 TFLOPS of compute performance, which means that performance-wise the R6870 will sit in between the R5850 and 5870.
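If you wonder where those TFLOPS and bandwidth figures come from, they follow directly from the spec table; a quick back-of-the-envelope sketch, assuming AMD's usual counting of two floating-point operations per stream processor per clock:

```python
# Back-of-the-envelope check of the spec-sheet numbers above.
# Assumes AMD's usual counting: 2 FLOPs (a multiply-add) per stream processor per
# clock, and peak bandwidth = bus width in bytes x effective memory data rate.

def gflops(stream_processors, core_mhz):
    return stream_processors * 2 * core_mhz / 1000.0            # GFLOPS

def bandwidth_gbs(bus_width_bits, effective_rate_mhz):
    return (bus_width_bits / 8) * effective_rate_mhz / 1000.0   # GB/s

print(gflops(960, 775))          # Radeon HD 6850 -> 1488 GFLOPS (~1.5 TFLOPS)
print(gflops(1120, 900))         # Radeon HD 6870 -> 2016 GFLOPS (~2.0 TFLOPS)
print(bandwidth_gbs(256, 4000))  # Radeon HD 6850 -> 128 GB/s
print(bandwidth_gbs(256, 4200))  # Radeon HD 6870 -> 134.4 GB/s
```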
Feature-wise both graphics cards are very similar to the last-generation products and are merely an advanced update. There are some new features though: DisplayPort now follows the 1.2 interface specification, and HD3D, UVD3 and HDMI 1.4a are introduced. We also spot a new anti-aliasing mode (morphological AA), better anisotropic filtering and improved tessellation performance - up to twice that of the 5000 series.


So it's time to have a closer look at the retail product. First off, above is the packaging so you know what to look for in the stores, and well... just to satisfy your curiosity of course ;)
MSI R6850 Cyclone Power Edition

Above you can spot the package bundle. That card is custom alright: heatpipe cooling based and overall silent, but we'll show you that in our tests of course. Bundled are the basics like drivers and power cables for the card.
Being the Cyclone Power Edition, the card is factory clocked at 860 MHz on the GPU, and the memory runs at a faster effective data rate of 4400 MHz as well.

The card is non-reference obviously, and comes with a DisplayPort 1.2 connector, one HDMI 1.4a connector and two DVI connectors, of which one is single-link and the other dual-link.


AMD HD3D

AMD HD3D is all new. It is comparable to NVIDIA's 3D Vision and thus brings 3D display support for games, movies and videos to the AMD Radeon line-up of graphics cards. This of course requires separate 3D goggles. HD3D includes support for Blu-ray 3D and third-party 3D applications.
The new implementation is very tricky and, in its default setup, far away from what NVIDIA offers (and I do mean that in a negative way). Now, for 3D Blu-ray support you will only need the right hardware (glasses, graphics card, 3D TV/monitor, HDMI 1.4a cable) and a software playback solution that supports it.
3D gaming wise... AMD waved everything away, and that's where we think AMD HD3D is going to fail.
There are no kits, and there is no real driver support within the Catalyst drivers. You can play games in 3D, yet you'll need to actively arrange support for this yourself. Meaning, AMD handed 3D game support over to third-party vendors. To get 3D game support you must buy software from a company like DDD (Dynamic Digital Depth); the software implementation costs 50 USD (temporarily priced at 25 USD), and this will allow you to play games in 3D. Unfortunately these methods have in the past always proven to be a little icky, with a lack of native game support. Next to that you'll need to seek out and purchase a 3D monitor with goggles and/or find a kit that provides these.
We're sure the third-party vendors really like this, as it will boost their sales; we, however, very much doubt the end user will share that sentiment.


Unified Video Decoder 3.0

UVD, short for Unified Video Decoder, is the video processor embedded into the GPU of the graphics card. With proper third-party software like WinDVD or PowerDVD, or the free Media Player Classic, you can enable support for UVD, which provides hardware acceleration for media content like the MPEG2, H.264 and VC-1 high-definition video formats used by Blu-ray.
In short, this feature allows the video processor in the GPU to apply hardware acceleration and video processing functions to your movies and videos while keeping power consumption and CPU utilization low.
That means low CPU utilization whilst scoring maximum image quality. Over the years this engine has advanced, and while it is not massively different from the older UVD engines, we do see some new tweaks. Dual-stream decoding was already introduced in UVD2: if you play back a Blu-ray movie and simultaneously want to watch a director's commentary (guided by video), you can view both the movie and, in a smaller window, the additional content (picture-in-picture). Obviously this is Blu-ray 2.0 compatibility, and the additional content has to be an actual feature of the movie. But definitely fun to see.
UVD 3.0 allows for:
  • Hardware-accelerated decoding of two 1080p HD streams
  • Compatibility with Windows Aero mode - playback of HD videos while Aero remains enabled
  • Video gamma - independent gamma control from the Windows desktop
  • Brighter whites - Blue Stretch processing increases the blue value of white colors for brighter videos
  • Dynamic Video Range - controls levels of black and white during playback

Dynamic Contrast Enhancement improves the contrast ratio of videos in real time, on the fly. It is a bit of a double-edged feature though, as there are certain situations where you do not want your contrast increased.
Another feature is Dynamic Color Enhancement. It's pretty much a color-tone enhancement feature and will slightly enforce a color correction where it's needed. We'll show you that in a bit, as I quite like this feature; it makes certain aspects of a movie a little more vivid.
New in UVD3 is the handling of entropy decoding and bitstream support for MPEG2 and MPEG4 DivX/Xvid movies, and of course there is hardware support for Blu-ray 3D's multi-view codec (MVC). Have a peek at the block diagram above demonstrating that.
To be able to play back high-definition content you'll still need software like WinDVD or PowerDVD, an HD source (Blu-ray drive) and an HDCP-capable monitor or television.
For those interested in GPU-based MKV/x.264 content acceleration, playback and image quality enhancements: we spotted a lovely little free application that manages all of this, which we'll get to in a moment.


Bitstreaming audio
Directly tied to the UVD3 engine is, obviously, sound. AMD's Radeon series 3000, 4000, 5000 and now 6000 cards can pass lossless sound directly through the HDMI connector. This has been upgraded: it's now possible to output 7.1-channel lossless sound at 192 kHz / 24-bit. The HDMI audio output follows the HDMI 1.4a standard and supports Dolby TrueHD and DTS-HD audio. Obviously there is also support for standard PCM, AC-3 and DTS. HDMI 1.4a allows bitrates of up to 65 Mbps and adds 3DTV support.
So with an AMD Radeon HD 6800 series video card, all you need to do is install the card into your motherboard and connect it to your receiver with an HDMI cable; the AMD Radeon HD 6870 or 6850 removes the need for a separate sound card.
Requirements
  • Playback software, say CyberLink's PowerDVD 9 or newer
  • AV receiver that supports Dolby TrueHD / DTS-HD Master Audio playback (HDMI v1.3 compliant)
  • Two HDMI cables (male-to-male connectors, rated at 225 MHz or higher)
  • Appropriate speaker cables for your surround sound speaker system
HDMI 1.4a - HDMI has been updated to the latest version, 1.4a, which was released last March. 1.4a adds support for the two mandatory 3D broadcast formats (Side-by-Side Horizontal and Top-and-Bottom), which were missing from the original 1.4 specification. So all the new 3D TVs are supported and, for the geeks, so is the new HDMI frame packing format.


Watching 1080P DXVA videos post-processed by your GPU


The x.264 format is often mentioned in the same breath as Matroska MKV, a much-admired media file container that frequently wraps x264-encoded content. Especially 1920x1080P movies usually carry some form of H.264 encoding inside that container. As a result, you'd normally need a very beefy PC with a powerful processor to play back such movies error-free, without dropped frames and nasty stutters, as PowerDVD and other PureVideo HD-supporting software by themselves will not support it.
Any movie in a popular file format (XviD/DivX/MPEG2/MPEG4/H.264/MKV/VC1/AVC) can be played in this little piece of software, Media Player Classic, without the need to install codecs and filters, and where it can, it will enable DXVA for playback. DXVA is short for DirectX Video Acceleration, and as you can tell from those four words alone, it will try wherever possible to accelerate content on the GPU, offloading the CPU. Which is exactly what we are after.
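If you want to double-check on your own system that decoding really runs on the GPU, a quick and dirty way outside of any media player is to push a clip through a DXVA2 decode and watch CPU usage; a minimal sketch, assuming ffmpeg is installed and on your PATH (the file name movie.mkv is just a placeholder):

```python
# Rough sketch: decode a clip through the DXVA2 hardware path and throw the frames
# away, purely to observe CPU load while the GPU handles the decoding.
# Assumes ffmpeg is installed and on the PATH; "movie.mkv" is a placeholder name.
import subprocess

subprocess.run([
    "ffmpeg",
    "-hwaccel", "dxva2",   # DirectX Video Acceleration (Windows)
    "-i", "movie.mkv",
    "-f", "null", "-",     # decode only, discard the output
], check=True)
```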
There's more to this software though:
  • A much-missed feature with NVIDIA's PureVideo and ATI's UVD is a very simple yet massively important function: pixel (image) sharpening.
If you watch a movie on a regular monitor, PureVideo playback is brilliant. But if you display the movie on a larger HD TV, you'll quickly wish you could enable little extras like sharpening. I remember the GeForce 7 series having this natively supported from within the ForceWare drivers. After the GeForce 8 series was released that feature was stripped away, and to date it has to be the most-missed HTPC feature ever.
Media Player Classic has yet another advantage: not only does it try to enable DXVA where possible through the video processor, it can also utilize the shader processors of your graphics card to post-process content. A lot of shaders (small pieces of pixel shader code) can be executed on the GPU to enhance image quality. MPC has this feature built in; you can even select several shaders like image sharpening and de-interlacing, combine them, and thus run multiple enhancements simultaneously. Fantastic features for high-quality content playback.
Here you see (right) MPC HT edition accelerating an x.264 version of a movie at 1080P. Mind you, the one spike in CPU cycles is me starting up the actual capture software.
The Radeon 6000 series will completely accelerate (DXVA) this movie without any issues. Complex image sharpening is handled by the shader processors, and we have the PC 0-255 color profile activated over the shaders as well to get nicer black levels. Even if we expand this window to a resolution of 2560x1600 the CPU load remains low and the graphics card manages that resolution just fine.
The GPU is doing all the work as you can see; the H.264 content inside the MKV container barely touches the CPU at all.

AMD Accelerated Parallel Processing (APP) Technology

In this day and age there is more to graphics cards than just playing games. More and more non-gaming-related features can be, and are being, offloaded to the GPU. ATI first introduced this as ATI Stream, which has now been renamed AMD Accelerated Parallel Processing (APP). It is a software layer that allows software developers to 'speak' with the GPU and have it process data using your graphics card. That really is the most simple and basic description I can give it.
Currently AMD simply follows, and believes strongly in, open standards such as OpenCL or, for the easiest path to adding compute capabilities, Microsoft's DirectX 11 DirectCompute. OpenCL is what AMD believes in the most, as it allows any developer to write code that scales well on both CPUs and GPUs.
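To give you a feel for what 'speaking with the GPU' through OpenCL actually looks like, here is a minimal sketch (not AMD's own sample code) that adds two arrays on whatever OpenCL device the driver exposes; it assumes Python with numpy and pyopencl installed:

```python
# Minimal OpenCL sketch: add two float arrays on the GPU.
# Assumes Python with numpy and pyopencl installed and an OpenCL-capable driver
# (AMD's APP/Stream stack exposes Radeon cards as OpenCL devices).
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()          # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)       # same math, done on the GPU
```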
To make things a little clearer for the end user: AMD APP is used in software like CyberLink MediaShow and PowerDirector, ArcSoft MediaConverter 4, SimHD (upscaling, H.264 encoding), Total Media Theatre (hardware-accelerated MPEG4/MVC), Roxio Creator 2010, Adobe Photoshop CS4 and so on, where the GPU assists the software in certain functions, offloading the processor.
And of course, under that umbrella also falls... folding...

Folding@Home using the ATI Radeon series 6000 GPU

Folding@Home is a project where you can have your GPU or CPU (when the PC is not in use) help solve diseases that involve protein folding. Over the past years a lot of progress has been made between the two parties involved, and right now there is a GPU folding client available that works with Radeon series graphics processors. It is ATI Stream/APP based, meaning that all Stream/APP-ready GPUs can start folding.
The tech2in team is ranked in the Folding@Home top 70; yes, I'm very proud of our guys crunching these numbers, especially since there are tens of thousands of other teams. The client is out, so if possible please join team tech2in and let's fold away some nasty stuff. The good thing is, you won't even notice that it's running.



Graphics card temperatures

So here we'll have a look at GPU temperatures. First up, IDLE (desktop) temperatures.
IDLE temperatures are always a tiny bit tricky to read out, as Windows Aero could have an effect on them. Overall, anything below 50 degrees C is considered okay and anything below 40 degrees C is very nice. We threw in some cards we recently tested at random in the above chart.
But what happens when we are gaming? We fire off an intense game-like application at the graphics cards and measure the highest temperature at the GPU. 

So with the card fully stressed we kept monitoring temperatures and noted down the highest GPU temperature reached:
  • Under heavy game stress the R6850's load temperature stabilized at roughly 55 degrees C only.
With today's graphics cards, please make sure your PC is well ventilated at all times; this will seriously help with overall GPU temperatures.


Noise Levels

When graphics cards produce a lot of heat, that heat usually needs to be transported away from the hot core as fast as possible. Often you'll see massive active fan solutions that can indeed get rid of the heat, yet all these fans tend to make the PC a noisy son of a gun. Do remember that the test we do is extremely subjective. We bought a certified dBA meter and measure how many dBA originate from the PC. Why is this subjective, you ask? Well, there is always noise in the background, from the street, the HDD, the PSU fan and so on, so this is by a mile or two an imprecise measurement. You could only achieve an objective measurement in a sound test chamber.
The human hearing system has different sensitivities at different frequencies. This means that the perception of noise is not at all equal at every frequency. Noise with significant measured levels (in dB) at high or low frequencies will not be as annoying as it would be when its energy is concentrated in the middle frequencies. In other words, the measured noise levels in dB will not reflect the actual human perception of the loudness of the noise. That's why we measure the dBA level. A specific circuit is added to the sound level meter to correct its reading in regard to this concept. This reading is the noise level in dBA. The letter A is added to indicate the correction that was made in the measurement. Frequencies below 1kHz and above 6kHz are attenuated, whereas frequencies between 1kHz and 6kHz are amplified by the A weighting.
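For the curious, that A-weighting correction is nothing more than a fixed frequency-response curve standardized in IEC 61672; a small sketch of what the meter applies under the hood:

```python
# A-weighting curve (IEC 61672): how much a sound level meter corrects a plain dB
# reading at a given frequency before reporting dBA.
import math

def a_weighting_db(f):
    """Return the A-weighting correction in dB at frequency f (Hz)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0

for f in (100, 1000, 3000, 10000):
    print(f, round(a_weighting_db(f), 1))
# 100 Hz  -> about -19 dB (attenuated)
# 1 kHz   ->   0 dB (the reference point)
# 3 kHz   -> about +1 dB (slightly amplified)
# 10 kHz  -> about -2.5 dB (attenuated again)
```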

TYPICAL SOUND LEVELS
Jet takeoff (200 feet)          120 dBA
Construction site               110 dBA   Intolerable
Shout (5 feet)                  100 dBA
Heavy truck (50 feet)            90 dBA   Very noisy
Urban street                     80 dBA
Automobile interior              70 dBA   Noisy
Normal conversation (3 feet)     60 dBA
Office, classroom                50 dBA   Moderate
Living room                      40 dBA
Bedroom at night                 30 dBA   Quiet
Broadcast studio                 20 dBA
Rustling leaves                  10 dBA   Barely audible
There are a lot of differences in measurement methods among websites. Some even place the dBA meter 10 cm away from the card. Considering that is not where your ear is located, we do it our way.
For each dBA test we close the PC chassis and move the dBA meter 75 cm away from the PC, roughly the distance you will sit from a PC in a real-world situation.
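That distance matters a lot for the numbers you read; as a rough indication (treating the card as an ideal point source in a free field, which a PC in a room certainly is not):

```python
# Rough indication of why the measuring distance matters: for an ideal point source
# in a free field, sound pressure level drops by 20*log10(r2/r1) dB as you move away.
# A PC in a room is neither, so treat this as indicative only.
import math

def level_drop_db(r_near_cm, r_far_cm):
    return 20.0 * math.log10(r_far_cm / r_near_cm)

print(round(level_drop_db(10, 75), 1))   # ~17.5 dB difference between 10 cm and 75 cm
```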
For the card in IDLE we measure 37 dBA, which really can't be heard. And fully stressed... 41 dBA, a perfectly fine noise level.





Test Environment & Equipment

Here is where we begin the benchmark portion of this article, but first let me show you our test system plus the software we used.
Mainboard: eVGA X58 Classified
Processor: Core i7 965 @ 3750 MHz
Graphics card: MSI Radeon HD 6850 Cyclone Power Edition
Memory: 6144 MB (3x 2048 MB) DDR3 Corsair @ 1500 MHz
Power supply unit: 1200 Watt
Monitor: Dell 3007WFP - up to 2560x1600
OS related software:
  • Windows 7 RTM 64-bit
  • DirectX 9/10/11 End User Runtime (latest available)
  • ATI Catalyst 10.12
  • NVIDIA GeForce series 260 driver / 263.09 for the GTX 500 series
Software benchmark suite
  • Battlefield Bad Company 2
  • Colin McRae Dirt 2
  • Far Cry 2
  • Call of Duty: Modern Warfare 2
  • Crysis WARHEAD
  • Anno 1404
  • 3DMark Vantage
  • 3DMark 11
  • Metro
A word about 'FPS'
What are we looking for in gaming, performance-wise? First off, tech2in obviously tends to think that all games should be played at the best possible image quality (IQ). There's a dilemma though: IQ often interferes with the performance of a graphics card. We measure this in FPS, the number of frames a graphics card can render per second; the higher it is, the more fluid your game will be.
A game's frames-per-second figure is a measured average over a series of tests. A test is often a timedemo, a recorded part of the game which is a 1:1 representation of the actual game and its gameplay experience. After forcing the same image quality settings, this timedemo is then used for all graphics cards so that the actual measurement is as objective as can be (a tiny sketch of how such an average comes together follows the playability list below).
Frames per second | Gameplay
<30 FPS           | very limited gameplay
30-40 FPS         | average yet very playable
40-60 FPS         | good gameplay
>60 FPS           | best possible gameplay
  • So if a graphics card manages less than 30 FPS, the game is not very playable; we want to avoid that at all cost.
  • From 30 FPS up to roughly 40 FPS you'll be able to play the game quite well, with perhaps a tiny stutter at certain graphically intensive parts. Overall a very enjoyable experience. Match the best possible resolution to this result and you'll have the best possible rendering quality versus resolution; hey, you want both of them to be as high as possible.
  • When a graphics card is doing 60 FPS on average or higher, you can rest assured that the game will likely play extremely smoothly at every point, with every possible in-game IQ setting turned on.
  • Over 100 FPS? You either have a MONSTER graphics card or a very old game.
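As promised above, here is what that averaging boils down to in practice; a tiny sketch (the frame times are made up for illustration):

```python
# How an average-FPS figure comes together: per-frame render times logged during a
# timedemo are averaged, then matched against the playability brackets above.
def average_fps(frame_times_ms):
    total_seconds = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_seconds

def playability(fps):
    if fps < 30:
        return "very limited gameplay"
    if fps < 40:
        return "average yet very playable"
    if fps <= 60:
        return "good gameplay"
    return "best possible gameplay"

timedemo = [22.0, 25.1, 19.8, 30.2, 24.4]   # hypothetical frame times in milliseconds
fps = average_fps(timedemo)
print(round(fps, 1), "FPS ->", playability(fps))   # ~41.2 FPS -> good gameplay
```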
Setup your monitor first
Before playing games, setting up your monitor's contrast and brightness levels is a very important thing to do. I recently realized that a lot of you have set up your monitor improperly. How do we know this? Because we receive a couple of emails every now and then telling us that a reader can't distinguish between the colors in our benchmark charts. If that happens, your monitor is not properly set up.
This simple test pattern is evenly spaced from 0 to 255 brightness levels, with no profile embedded. If your monitor is correctly set up, you should be able to distinguish each step, and each step should be visually distinct from its neighbors by the same amount. Also, the dark-end step differences should be about the same as the light-end step differences. Finally, the first step should be completely black.
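If you'd rather generate such a step pattern yourself than hunt one down, a minimal sketch will do; it assumes Python with the Pillow imaging library installed and is not the exact pattern we use:

```python
# Generate a simple grayscale step wedge: equal-width bands from 0 (black) to 255 (white).
# Assumes Pillow is installed; this is not the exact test pattern used in the review.
from PIL import Image

STEPS, BAND_W, HEIGHT = 16, 64, 256
img = Image.new("L", (STEPS * BAND_W, HEIGHT))
for i in range(STEPS):
    level = round(i * 255 / (STEPS - 1))          # 0, 17, 34, ... 255
    band = Image.new("L", (BAND_W, HEIGHT), level)
    img.paste(band, (i * BAND_W, 0))
img.save("step_wedge.png")
```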


DX11: 3DMark 11

3DMark 11 is the latest version of what is probably the most popular graphics card benchmark series. Designed to measure your PC's gaming performance, 3DMark 11 makes extensive use of all the new features in DirectX 11, including tessellation, compute shaders and multi-threading. Trusted by gamers worldwide to give accurate and unbiased results, 3DMark 11 is the best way to consistently and reliably test DirectX 11 under game-like loads.
These will be the requirements:
  • 3DMark 11 requires DirectX 11, a DirectX 11 compatible video card, and Windows Vista or Windows 7.
  • OS: Microsoft Windows Vista or Windows 7
  • Processor: 1.8 GHz dual-core Intel or AMD CPU
  • Memory: 1 GB of system memory
  • Graphics: DirectX 11 compatible graphics card
  • Hard drive space: 1.5 GB
  • Audio: Windows Vista / Windows 7 compatible sound card
Graphics Test 1
  • Based on the Deep Sea scene
  • No tessellation
  • Heavy lighting with several shadow casting lights
Graphics Test 2
  • Based on the Deep Sea scene
  • Medium tessellation
  • Medium lighting with few shadow casting lights
Graphics Test 3
  • Based on the High Temple scene
  • Medium tessellation
  • One shadow casting light
Graphics Test 4
  • Based on the High Temple scene
  • Heavy tessellation
  • Many shadow casting lights
Physics Test
  • Rigid body physics simulation with a large number of objects
  • This test runs at a fixed resolution regardless of the chosen preset
We test 3DMark 11 in Performance mode, which gives us a good indication of graphics card performance in the low, mid-range and high-end graphics card segments. The application is DirectX 11 only, meaning only so many cards are compatible. Here's a first selection.


Overclocking The Graphics Card

As most of you know, with most videocards you can apply a simple series of tricks to boost the overall performance a little. You can do this at two levels: tweaking by enabling registry or BIOS hacks, or very simply by tampering with image quality. And then there is overclocking, which by far gives you the best possible results.
What do we need?
One of the best tools for overclocking NVIDIA and ATI videocards is our own AfterBurner which will work with 90% of the graphics cards out there. We can really recommend it.

Where should we go?

Overclocking: by increasing the frequency of the videocard's memory and GPU, we can make the videocard perform more calculation cycles per second. It sounds hard, but it really can be done in less than a few minutes. I always tend to recommend that novice users and beginners not increase the frequency by more than 5% on the core and memory clocks. Example: if your card runs at 600 MHz (which is pretty common these days), then I suggest you don't increase the frequency by more than roughly 30 MHz.
More advanced users often push the frequency way higher. Usually when your 3D graphics start to show artifacts such as white dots ("snow"), you should back down 10-15 MHz and leave it at that. When you overclock too hard, the card will start to show artifacts or empty polygons, or it will even freeze. Carefully find that limit and then back down at least 20 MHz from the point where you noticed the first artifact. Look carefully and observe well. I really wouldn't know why you'd need to overclock today's tested card anyway, but we'll still show it.
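To put those rules of thumb into plain numbers, here is a small sketch; the clock values are only examples:

```python
# The conservative overclocking guidelines from above, expressed in numbers.
def safe_first_bump(stock_mhz, pct=5):
    """Suggested first overclock for novices: no more than roughly 5% over stock."""
    return stock_mhz * (1 + pct / 100.0)

def backed_off_limit(first_artifact_mhz, margin_mhz=20):
    """Once artifacts appear, back down by at least this margin and stay there."""
    return first_artifact_mhz - margin_mhz

print(safe_first_bump(600))      # 630.0 MHz -> roughly a 30 MHz bump on a 600 MHz core
print(safe_first_bump(775))      # ~814 MHz on the reference R6850 core clock
print(backed_off_limit(1040))    # e.g. first artifacts at 1040 MHz -> settle near 1020 MHz
```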
All in all... do it at your own risk.
Clocks        | Original (reference) | This sample (factory) | Overclocked
Core clock    | 775 MHz              | 860 MHz               | 1017 MHz
Shader clock  | 775 MHz              | 860 MHz               | 1017 MHz
Memory clock  | 4000 MHz             | 4400 MHz              | 4800 MHz
Now we increased the fan RPM to 75%, which is audible but not too noisy. We do this as we'll be voltage tweaking the graphics card. We reached a very nice overclock, showing a clear bump in overall performance.
Voltage tweaking was done with AfterBurner; we applied 1250 mV on the core. Our stable end result was 1017 MHz on the core and 4800 MHz on the memory. Temperatures did increase, but remained steady at roughly 65 degrees C only. dBA levels go up towards 42~43 dBA at roughly 75% fan RPM. All acceptable.
Here's what that does towards overall game performance.
Above: Call of Duty: Modern Warfare 2, maxed-out image quality settings as before, with 4xAA 16xAF.
Above: Battlefield Bad Company 2, maxed-out image quality settings as before, with 8xAA 16xAF.

Above we can see 3DMark 11 with the Performance preset applied. Compared to a reference R6850 we gain 20 to 25% in performance, and roughly 10% over the MSI factory clocks. Pretty impressive stuff alright.


Final words and conclusion

With the R6850 Cyclone Power Edition, MSI is offering a very potent product. At its baseline performance it is already a decent chunk faster than the reference 6850 products available on the market, and there's great overclocking headroom left as well. As you have been able to see, with the help of a voltage tweak in AfterBurner we were able to push the graphics core over a full GHz... that's mighty impressive when you realize the default reference clock is just 775 MHz.


Despite that lovely success, we'd of course not recommend running the GPU continuously at that clock frequency, but for a fun benchmark run... sure, why not :)
The feature set itself is of course top notch, made even better with the new HD3D and the UVD3 update of the video processor. There are no ground-breaking new designs aside from some feature updates. Price/performance wise, however, you will have noticed that the R6800 series offers a truckload of value for money.
The cooler that MSI uses is very silent and efficient; with its factory overclock you can expect game temperatures of only up to 55 degrees C, so that's plenty of cooling performance at comfortable noise levels as well. The fact remains that we brought this card over 1000 MHz on the core frequency without any temperature issues, a statement all by itself; it shows that the concept MSI applies really works.
So the baseline performance of this product is grand, the cooling is just extremely silent, and all of that can be purchased for roughly 169/179 EUR. We've stated it several times before: the Radeon HD 6800 series offers great value for money, and you can play any game to date fairly well up to a monitor resolution of 1920x1200. As such we can wholeheartedly recommend this product.
It's really interesting to observe where MSI's graphics card department was 2-3 years ago and where it is now.