Intel is looking to the future even as their newest CPU, the 22nm Ivy Bridge, is taking something of a beating in the media. According to a few slides that have hit daylight, Intel is already working on moving some of its fabs to 14nm in preparation for their next generation of CPUs. Of course this is not that big of a deal really; Intel has moved from one process to the next like clockwork (insert “Tick-Tock” joke here).
AMD has recently released its new line of “Piledriver” CPUs, and they perform pretty darn well. No longer in its infancy, AMD's multi-threaded core approach has improved significantly since the “Bulldozer” line of chips. The FX-8350 is the best of these CPUs currently on the market, and retails for approximately $220.
Sharp, which is in deep financial trouble, could soon get a significant financial injection from several companies, particularly Dell, Intel and Qualcomm. The three companies are in talks with Sharp about investing large amounts of cash into the company. Sharp has reportedly asked Dell and Intel for $240 million in exchange for shares in the company or as debt. Qualcomm's investment is expected to be somewhat smaller.
There is a new security warning for some people running virtualized systems on Intel CPUs. According to researchers at US-CERT (the Computer Emergency Readiness Team), the issue exists with some 64-bit operating systems running on a hypervisor-style host machine (also if the host OS is 64-bit). The vulnerability allows for escalation of privileges and a potential guest-to-host escape.
Some specs on AMD’s next generation CPU, called Bulldozer, have found their way onto the Internet in what appears to be a conglomeration of leaked slides and other info from around the web. We took a look at some of this and compared it to what we know about AMD’s existing CPU architecture as well as what Intel has to offer with their Core lineup.
First let’s talk about the existing AMD CPUs and why they tend to be so far behind Intel in some performance tests. The biggest issue that we have found is in the memory controller. Where the average Intel CPU shows 18-21GB/s worth of bandwidth, even AMD’s top-of-the-line Phenom II X6 tops out at between 14-16GB/s. This is a serious issue when you are dealing with multiple CPU cores and applications that are getting more and more bloated. But why is this an issue? One of the reasons is AMD’s caching structure. Back in the days when AMD was on top, memory and cache performance was a key component of that success. Part of this was also due to the extremely low latency of DDR (I can remember buying CAS 1 DDR modules, which just flew). Then when the AM2 CPUs came out with reduced cache sizes and DDR2 controllers (which were little more than the original IMCs updated to support DDR2), the much higher latency had a huge impact on AMD’s performance, especially with the smaller cache available to the CPU cores. So while we knew the CPU was improved, the actual performance gain was negligible.
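For context, bandwidth numbers like these come from synthetic memory benchmarks that time bulk data movement. A minimal sketch of that kind of measurement, using nothing but the Python standard library (purely illustrative; real tools such as STREAM control for caches and prefetchers far more carefully):

```python
import time

def measure_copy_bandwidth(size_mb=256, trials=5):
    """Rough memory-bandwidth estimate from timing large buffer copies.

    Copies a large bytearray several times and reports the best rate
    in GB/s. Each copy reads the source and writes the destination,
    so roughly 2x the buffer size moves through the memory subsystem.
    """
    buf = bytearray(size_mb * 1024 * 1024)
    best = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        dst = bytes(buf)  # one full read + write pass over the buffer
        elapsed = time.perf_counter() - start
        rate = (2 * len(buf)) / elapsed / 1e9  # bytes moved -> GB/s
        best = max(best, rate)
        del dst
    return best

print(f"~{measure_copy_bandwidth():.1f} GB/s (bulk copy)")
```

The function name and sizes here are our own; the point is only that a streaming copy like this is what produces the "GB/s" figures quoted in reviews.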
Moving forward into the Phenom and Phenom II, AMD had even more problems with memory performance, despite trying to add more cache (and higher associativity). The issue still revolved around the fact that the IMC in these processors had not changed much in terms of core design. Nor had the caching structure; sure, it had gotten larger, but its overall performance had not improved much.
Now for comparison let’s talk about the technology behind each IMC. AMD’s Phenom II has a 144-bit DDR3 controller under the hood, which according to AMD should be able to get you up to 21GB/s of memory bandwidth. The fact that we have never seen that is due to the cache structure: each CPU core has two 64KB L1 cache blocks (Data and Instruction) and 512KB of L2 (16-way associative) to work with, while the total shared L3 cache is limited to 6MB (64-way associative).
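That 21GB/s number is just the theoretical peak of a dual-channel (2 × 64-bit) DDR3-1333 configuration, and the arithmetic is easy to check yourself:

```python
def ddr_peak_gbs(channels, bus_bits, transfers_per_s):
    """Peak DDR bandwidth = channels * bus width in bytes * transfer rate."""
    return channels * (bus_bits / 8) * transfers_per_s / 1e9

# Dual-channel DDR3-1333: two 64-bit channels at 1333 million transfers/s
print(round(ddr_peak_gbs(2, 64, 1333e6), 1))  # -> 21.3 (GB/s)
```

Real-world numbers fall well short of this peak, which is exactly the gap the cache discussion above is trying to explain.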
Compared to this, Intel’s Core IMC (dual channel only) uses two 64-bit memory controllers, which allows its very different caching structure to operate a little more efficiently. Intel’s Core i7 has two L1 caches per core (again Data and Instruction), each 32KB, while the L2 cache is 256KB per core (only 8-way associative) and the L3 cache is bumped up to 8MB (16-way associative). That 8MB is also shared with the IGP on the Core i7 and is further stretched by the extra thread per core, but the core design allows it to operate in a way that AMD’s just cannot (at this time). There is also a lot to be said for the streamlined instruction handling in the new Core CPUs as well as the smaller process size.
Bulldozer, on the other hand, shows up with two 72-bit wide DDR3 memory controllers (which still add up to 144 bits); these serve four Bulldozer modules (each with two cores). The caching structure is also different: you get 128KB of L1 per module (still broken into two 64KB blocks), 8MB of L2 cache (2MB per Bulldozer module) and 8MB of L3 cache. Both the L2 and L3 are 16-way associative. That last point is interesting, as it moves away from the massive 64-way associativity the Phenom II had.
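If you want to see how your own machine's cache hierarchy stacks up against these numbers, glibc on Linux exposes the sizes through sysconf; a small sketch (the SC_LEVEL* names are Linux-specific and not available on every platform, so we probe before querying):

```python
import os

def cache_sizes():
    """Return cache sizes in bytes as reported by sysconf (Linux/glibc only)."""
    sizes = {}
    for name in ("SC_LEVEL1_DCACHE_SIZE",
                 "SC_LEVEL2_CACHE_SIZE",
                 "SC_LEVEL3_CACHE_SIZE"):
        if name in os.sysconf_names:
            try:
                value = os.sysconf(name)
            except (OSError, ValueError):
                continue  # name known but not supported here
            if value > 0:
                sizes[name] = value
    return sizes

for level, size in cache_sizes().items():
    print(f"{level}: {size // 1024} KB")
```

On platforms without these sysconf names the function simply returns an empty dict rather than failing.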
Of course we are still only seeing 1MB of L3 per real core, but we might have hope for AMD yet. That is, IF these changes to the caching and memory amount to something. Time will tell on this one, and we are all certainly waiting to see just how this new CPU (the first really new CPU from AMD in a long time) will do. I would love to see it show that AMD can still produce great products; after all, that will only push Intel into making improvements of their own, and at that point… the consumer wins.
Image and source ComputerBase.de
You know, there is a certain irony when a company brags about a product that contains its competitor’s hardware. Unfortunately for AMD, that is exactly the position they are in right now. AMD recently bought the company SeaMicro (for a handsome sum) for the purpose of gaining its interconnect technology. Intel picked up Cray’s interconnect unit shortly after, but there is talk that their deal predated the AMD one. Regardless of who bought what first, AMD bought the whole company while Intel only picked up a single division.
Two more pieces of the puzzle are falling into place in the move away from silicon in microprocessors. Silicon has been the mainstay for creating processors for… well, for a very long time. However, it has its limitations as the need to make transistors smaller continues to grow. Even if you are not a believer in Moore’s Law, you still cannot get around the fact that processors (GPU, CPU and “other”) are all growing more complex. This means that the number of components continues to grow, and we are faced with a couple of choices; either die in the vacuum of space or… no wait, that is someone else. The choices are actually very clear: make the processor dies larger and larger, or shrink the manufacturing process.
It looks like Acer plans to launch a new tablet. The unit price of this new tablet should not exceed $100, which will align the device with the cheapest Android tablets out on the market. However, a low price does not mean that the device itself will be bad; on the contrary, it is a pretty decent bang for the buck.
Some of you might remember the days of the “P” rating CPUs. It was an interesting time when you never really knew what you were actually getting in terms of clock speed. Instead you got a CPU named something like P333 or P500. This was an attempt by some manufacturers to show their “P”erformance rating in relation to Intel’s Pentium. Cyrix, AMD, and a couple of others used this to sell CPUs. Unfortunately, everyone knew that the P did not really stand for performance; it really meant Pentium equivalency. A Cyrix P667 was supposed to perform as well as an Intel Pentium at 667MHz (at least on paper). Sadly, this just confused the market further, and we all had the fun of trying to figure out what our CPUs were really doing. Now we might be seeing the trend return, but perhaps in reverse, as AMD has announced the Centurion CPU.
AMD is an interesting company; on the one hand they have some incredible ideas and can really bring some great features to life. Where they sometimes have an issue is in bringing them to market and to fruition. One of the reasons for this is their lack of funds. If you do not have enough money to push your products (like Intel and nVidia usually do), then you have to rely on the community to adopt and support your goals. We have watched this happen multiple times with everything from GPU-based physics (the integration of Ageia PhysX onto the X19xx series GPUs) to OpenCL. AMD shows off what is possible and then, due to lack of support and money, has to step back and watch as the “rich kids” run off with the toys.