After our first run-through with the Kingston HyperX SH100S3B 120GB SSD we had more than a few comments stating that 120GB was just not enough to work with. Although your typical 120GB drive is intended to be used as a boot drive with some basic applications installed on it, it is not meant to be your only drive. Still, people did not want to hear that, so we managed to arrange a peek at Kingston’s next upgrade kit: the HyperX 3K 240GB upgrade kit, which comes with a HyperX SH103S3 240GB SSD along with pretty much the same goodies you saw in the 120GB kit. So let’s take a quick look at what you get and then dive straight into performance.
Here at DecryptedTech we have always covered a very wide range of products (as well as technologies). However, there is one item that we have never really gotten too deep into: direct attached storage, and in particular Solid State Drives (SSDs). It is true that we show you their performance in almost every motherboard review we do here on the site, but we have never reviewed any SSDs exclusively. We have had many reasons for this, not the least of which is that there is still debate on how to properly test an SSD or HDD. While some feel that IOPS (Input/Output Operations per Second) are key, others want to know exactly how fast their data moves into and out of the drive. We sat down and came up with what we hope is a good balance of synthetic and real-world tests that will give you the best idea of how an SSD performs. So with that in mind we are going to dive into Kingston’s HyperX SH100S3B/120G 120GB Solid State Drive Upgrade Kit.
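The two camps in that testing debate are really measuring the same thing at different block sizes. As a rough sketch (the drive ratings below are illustrative numbers, not results for this kit), IOPS and throughput relate like this:

```python
# Rough relationship between the two metrics in the SSD testing debate:
# throughput (MB/s) = IOPS x block size. Figures below are illustrative
# examples, not measured results for any particular drive.

def iops_to_mbps(iops: int, block_size_kib: int) -> float:
    """Convert an IOPS figure at a given block size to MB/s."""
    return iops * block_size_kib * 1024 / 1_000_000

# A drive doing 40,000 IOPS of 4 KiB random reads:
print(iops_to_mbps(40_000, 4))    # → 163.84 (MB/s)

# The same drive doing 128 KiB sequential transfers at only 2,000 IOPS:
print(iops_to_mbps(2_000, 128))   # → 262.144 (MB/s)
```

This is why a drive can post huge sequential MB/s numbers while still feeling slow on small random workloads, and why we try to show both sides in our tests.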
The net is full of articles talking about how this or that technology company is controlling their software, hardware, IP (Intellectual Property) or some other item that people want to complain about. You also cannot run a search on net neutrality, the DMCA, the MPAA, the RIAA, The Pirate Bay or, of course, Apple without hearing about how medieval and outdated their concepts of fair usage are. I have talked about this kind of corporate control for years as well. It is oppressive, stifles the market and hurts consumers. However, there is one type of control that is good for the consumer: the type of control that Kingston is holding over their ValueRAM Server Premier memory. What Kingston has done is take their already great server memory and add an extra level of quality control to ensure maximum performance and stability. They have done this by controlling every part that goes into this product, right down to the revision of the chip die. Let’s take a quick look at how this works and what it means to the consumer and the enterprise.
Rambus has shown off a new memory architecture called R+ LPDDR3, designed specifically for mobile devices. The technology is compatible with the DFI 3.1 and JEDEC LPDDR3 standards, and offers up to 25 percent lower power consumption and higher performance than standard LPDDR3.
Some specs on AMD’s next generation CPU, called Bulldozer, have found their way onto the Internet in what appears to be a conglomeration of leaked slides and other info from around the web. We took a look at some of this and compared it to what we know about AMD’s existing CPU architecture as well as what Intel has to offer with their Core lineup.
First let’s talk about the existing AMD CPUs and why they tend to be so far behind Intel in some performance tests. The biggest issue that we have found is in the memory controller. Where the average Intel CPU shows 18-21GB/s worth of bandwidth, even AMD’s top of the line Phenom II X6 tops out at between 14-16GB/s. This is a serious issue when you are dealing with multiple CPU cores and applications that are getting more and more bloated. But why is this an issue? One of the reasons is AMD’s caching structure. Back in the days when AMD was on top, their memory and cache performance was a key component of that success. Part of this was also due to the extremely low latency of DDR (I can remember buying CAS 1 DDR modules, which just flew). Then the AM2 CPUs came out with reduced cache sizes and DDR2 controllers (which were little more than the original IMCs updated to support DDR2); the much higher latency had a huge impact on AMD’s performance, especially with the smaller cache available to the CPU cores. So while we knew the CPU was improved, the actual performance gain was negligible.
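The GB/s figures above come from dedicated benchmark suites, but the basic measurement is simple: time a large memory copy and divide bytes moved by seconds taken. The sketch below shows the idea from userspace; Python overhead means it will undershoot what tools like SiSoft Sandra report, so treat it as an illustration of the method, not a benchmark.

```python
# Crude userspace estimate of memory copy bandwidth, in the spirit of the
# GB/s figures quoted for the Intel and AMD IMCs. Interpreter overhead
# makes this a lower bound, not a real benchmark result.
import time

def copy_bandwidth_gbs(size_mb: int = 64, runs: int = 3) -> float:
    src = bytearray(size_mb * 1024 * 1024)  # buffer larger than any cache
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        dst = bytes(src)                    # one full read + write pass
        best = min(best, time.perf_counter() - start)
        del dst
    # Factor of 2: the copy both reads the source and writes the destination.
    return 2 * size_mb / 1024 / best

print(f"~{copy_bandwidth_gbs():.1f} GB/s memory copy bandwidth")
```

Real bandwidth benchmarks use streaming (non-temporal) stores and multiple threads to saturate the controller, which is why their numbers land closer to the theoretical peak.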
Moving forward into the Phenom and Phenom II, AMD had even more problems with memory performance, despite trying to add in more cache (and associativity). The issue still revolved around the fact that the IMC for these processors had not changed much in terms of core design. Nor had the caching structure; sure, it had gotten larger, but its overall performance had not improved much.
Now for comparison let’s talk about the technology behind each IMC. AMD’s Phenom II has a 144-bit DDR3 controller under the hood, which according to AMD should be able to get you up to 21GB/s of memory bandwidth. The fact that we have never seen that is due to the cache structure: each CPU core has two 64KB L1 cache blocks (Data and Instruction) and 512KB of L2 (16-way associative) to work with, while the total shared L3 cache is limited to 6MB (64-way associative).
Compare that to Intel’s Core IMC (dual channel only): the CPU has two 64-bit memory controllers, which allows Intel’s very different caching structure to operate a little more efficiently. Intel’s Core i7 has two L1 cache blocks per core (again Data and Instruction) of 32KB each, the L2 cache is 256KB per core (only 8-way associative), and the L3 cache is bumped up to 8MB (16-way associative). Now that 8MB is also shared with the IGP on the Core i7 and is further stretched by the extra thread per core, but the core design allows it to operate in a way that AMD’s just cannot (at this time). There is also a lot to be said for the streamlined instruction handling in the new Core CPUs, as well as the smaller process size.
Bulldozer, on the other hand, shows up with two 72-bit wide DDR3 memory controllers (which still add up to 144 bits); these serve four Bulldozer modules (each with two cores). The caching structure is also different: you get L1 at 128KB (still broken into two 64KB blocks), 8MB of L2 cache (2MB per Bulldozer module) and 8MB of L3 cache. Both the L2 and L3 are 16-way associative. The last is interesting, as it moves away from the massive 64-way associativity that Phenom II had.
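The three cache layouts described above are easier to compare side by side. The figures below are the ones quoted in this article (core counts assume a Phenom II X6, a quad-core Core i7, and an eight-core/four-module Bulldozer); the per-core L3 number is simply total L3 divided by core count.

```python
# Cache hierarchies as described in the article. Core counts are the
# parts discussed: Phenom II X6 (6 cores), Core i7 (4 cores),
# Bulldozer (4 modules / 8 cores). L1/L2 figures are per core, except
# Bulldozer's 128KB L1, which is quoted per module.

cpus = {
    "Phenom II X6": {"cores": 6, "l1_kb": 128, "l2_kb": 512,
                     "l3_total_mb": 6, "l3_assoc": 64},
    "Core i7":      {"cores": 4, "l1_kb": 64,  "l2_kb": 256,
                     "l3_total_mb": 8, "l3_assoc": 16},
    "Bulldozer":    {"cores": 8, "l1_kb": 128, "l2_kb": 1024,
                     "l3_total_mb": 8, "l3_assoc": 16},
}

for name, c in cpus.items():
    l3_per_core = c["l3_total_mb"] / c["cores"]  # derived, not quoted
    print(f"{name:12s}  L1 {c['l1_kb']}KB | L2 {c['l2_kb']}KB | "
          f"L3 {c['l3_total_mb']}MB total "
          f"({l3_per_core:.2f}MB/core, {c['l3_assoc']}-way)")
```

Dividing Bulldozer’s 8MB of L3 across its eight cores is where the 1MB-per-core figure in the next paragraph comes from.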
Of course we are still only seeing 1MB of L3 per real core, but we might have hope for AMD yet. That is, IF these changes to the caching and memory amount to something. Time will tell on this one, as we all know, and we are certainly waiting to see just how this new CPU (the first truly new CPU from AMD in a long time) will do. I would love to see this new CPU show that AMD can still produce great products; after all, it will only push Intel into making improvements of their own, and at that point… the consumer wins.
Image and source ComputerBase.de
Discuss this in our Forum
Corsair has introduced the Vengeance Extreme DDR3 memory kit, which is currently the fastest available memory kit. It is a dual-channel DDR3 kit with two 4 GB modules (8 GB total) that run at a clock speed of up to 3000 MHz at 1.65 V with 12-14-14-36 latencies.
Las Vegas, NV, CES 2013 – One of our must-see companies at CES is Kingston. We have been partnered with Kingston since early 2006, and they are truly one of our favorite companies, not only for the things they do, but also for the people behind the PR. Yesterday we stopped by their showroom (they had a half ballroom at Caesar’s Palace) to see what they had going on. When we walked in we saw quite a few display cases containing the history of Kingston memory and storage products. These cases were quite full, considering that Kingston is 20 years old and their HyperX line of memory products is 10 (their first HyperX memory module was DDR… just DDR). However, they have come a long way and are now one of (if not the) leading memory and flash storage companies in the world.
Toshiba has developed a special version of MRAM with low power consumption and high performance, intended for the processors built into smartphones and tablets. The MRAM is meant to replace the conventional SRAM cells used for cache in SoCs, which should let those SoCs use up to two-thirds less energy thanks to the new technology. Admittedly, it is not clear whether the power savings apply to the complete SoC or only to the cache.
The Flash Memory Summit wound down on Thursday after a four-day run at the Santa Clara Convention Center. The show floor was fairly crowded, with over 5,000 attendees and a sold-out exhibition space.
One of the items we have always beaten AMD up on is the poor memory performance of their CPUs and APUs. This little issue is what has separated AMD from Intel since the AM2 days. It has always been understood that latency has a massive impact on an internal memory controller: as your latency increases, your efficiency decreases. You can offset some of this by enlarging your cache and by optimizing the CPU to use it more efficiently. This is one area where AMD has traditionally had issues; even going back to the Athlon 64, we saw them reducing cache sizes to remove problems and bump performance.
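The latency-versus-cache trade-off described above is often summarized with the textbook Average Memory Access Time formula: AMAT = hit time + miss rate × miss penalty. The numbers below are illustrative, not measurements of any AMD or Intel part, but they show why a bigger or better-used cache can claw back what slow memory takes away.

```python
# Average Memory Access Time: the standard way to express how cache
# hit rates offset memory latency. All figures here are illustrative.

def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time in nanoseconds."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# A 1 ns cache backed by 80 ns DRAM, missing 5% of the time:
slow = amat(1.0, 0.05, 80.0)

# Same memory latency, but a larger/better-used cache cuts misses to 2%:
better = amat(1.0, 0.02, 80.0)

print(slow, better)  # the lower miss rate roughly halves the average
```

Cutting the miss rate from 5% to 2% here drops the average access time from 5.0 ns to 2.6 ns without touching memory latency at all, which is exactly the lever AMD has been pulling with larger caches.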