"real computers"? its arguable that servers are the real computers many companys like hp have much larger server branches then they do desktop. If you look at 2011 as a quick example first off pc sales where down and server sales where UP 9.5 million server units where sold and 359 million pc units where sold when you consider that a server is Easily 5-6x more expensive then a desktop your looking at a huge part of a company's sales. Before there is allot of complaining about my 5-6x comparison we are not talking what YOU could make a server for or you'r pc consider that MANY people still buy there pc at best buy for between 400-700 dollars and company regularly buy OS sever licenses alone for 1000$. This could be a big move for amd to steal a big part of the server market.
Is it just me, or are Intel, AMD and all the other chip manufacturers making up cryptic names for their products? Like A-Series A4-3400 or Core i5-2500K or something? I'm starting to have a hard time differentiating AMD's A, Z, G, E and C lines.
What's worse than that, JeTJL, is the mobile chips: i5-2430M, are you kidding me? I care about this stuff and I still end up having to look it up; how any normal layman on the street could have any clue is beyond me.
This is kinda off topic, but I have to say it. I think AMD has the potential to destroy Intel on the gaming side. My idea: take the FX-4100 chip (yes, considering BD is a disappointment) and make it (or any other chip) an APU with more than one kind of iGPU option. So there would be an FX-4100 with a 77xx iGPU, one with a 78xx iGPU, and one with a 79xx iGPU. I am not saying to integrate a 7970 into the CPU, but make it able to CrossFire with a 7970 and get a 30-40% performance boost. (I know CrossFire scales higher, but considering that the iGPU is not a full GPU, it makes sense to get only around 30-40% more performance.) I think APUs have a good future. I'm not an AMD fanboy, considering that I have an i7-2600K and an Nvidia GTX 460 card.
RTOS has nothing to do with servers. It's predominantly used in embedded applications. So this can be a platform for a smart TV for example, not a rendering farm. Automotive applications are another possible area.
Article failed hardcore when mentioning RTOS's used as webservers.
An RTOS is used when the execution time of all programs must be predictable, for consistent and reliable I/O monitoring. This includes pretty much all medical devices, military devices and geospatial devices (satellites, etc.). There are others, like your car's computer: not the GPS or media player, but the ECU that regulates your engine.
Anything that has limited tasks running and absolutely CAN NOT FAIL.
Here's a rewrite attempt for possible applications.
"An example of this could be an autopilot system for an automobile, in which output control signals must be generated with a consistent frequency and adhering to a latency maximum. The parallel floating-point processing power of the APU facilitates real-time processing of video and radar inputs to provide the system with a virtual model of the physical environment, and the real-time operating system ensures that steering, throttle and brake control signals do not suffer from latencies induced by process switching, memory refresh cycles, hard disk access times, software interrupts, and other unpredictable events that could otherwise occupy one or more cores for several milliseconds."
Long ago (in a galaxy not so far away) I worked at a factory equipment manufacturer. This equipment had to respond to one particular input within 0.2 milliseconds or it would not work properly for the intended purpose. The original design they employed to control a high-performance stepper motor was a custom-designed PCB with a combination of analog and digital circuits, user input controls, and output amplifiers. They tried to replace this design with a computer-controlled stepper motor that used a proprietary operating system, programming language, and set of control signals, in the hopes of reducing materials and assembly costs. The problem was that this "improvement" randomly introduced delays into the response times for that critical event: in practice, about one out of every 10 or so (randomly selected) responses was delayed by up to 20 milliseconds. This was caused by a poorly written hardware interrupt handler in that proprietary operating system. The new device was useless for the intended application, because it was not running a real-time operating system.
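The kind of jitter described in that story is easy to observe on any general-purpose OS: repeatedly sleep for a fixed interval and record how late each wakeup actually is. This is a rough sketch, not a measurement methodology; the 1 ms tick is an assumed target, and the tail overshoot you see will vary wildly by machine and OS load:

```python
import time

INTERVAL_S = 0.001   # assumed target tick of 1 ms
SAMPLES = 200

lates = []
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    time.sleep(INTERVAL_S)
    # How much later than requested did we actually wake up?
    overshoot = (time.perf_counter() - t0) - INTERVAL_S
    lates.append(overshoot)

print(f"worst overshoot: {max(lates) * 1000:.3f} ms")
print(f"mean overshoot:  {sum(lates) / SAMPLES * 1000:.3f} ms")
```

On a desktop OS the worst-case overshoot can be orders of magnitude above the mean, which is exactly the failure mode that killed the stepper-motor redesign above: the average latency was fine, but the occasional 20 ms outlier was fatal.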
[citation][nom]palladin9479[/nom]Article failed hardcore when mentioning RTOS's used as webservers.RTOS is used when the execution time of all programs must be predictable for consistent and reliable I/O monitoring. This includes pretty much all medical devices, military devices and Geo-spatial (satellites / ect..) devices. There are others like your car's computer, not the GPS or media player but the ECU which regulates your engine.Anything that has limited tasks running and absolutely CAN NOT FAIL.[/citation]
Yeah, no one likes a misfiring engine. In the 1970s, there was a car company that attempted to use a computer-controlled automatic transmission to improve fuel efficiency by 5-15%.
The problem was that the microprocessor was too weak to handle the data load, which kneecapped those engines severely.
[citation][nom]amk-aka-Phantom[/nom]In other words, AMD again can't find any use for their junk in real computers and decides to boast the fact that they're powering some niche stuff.[/citation]
Umm... niche stuff is where the real work happens, in things that are a lot bigger than consumer rubbish. For example, AMD chips power the controllers for high-volume, high-reliability Océ digital production printers running at 320 ppm.
Server workloads are not computationally intensive, they are operationally intensive. Data servers don't care about FLOPS (floating-point operations per second); they're more interested in IPS (instructions per second). The architecture of modern GPUs (mass quantities of efficient stream processors) means they can easily outperform CPUs in the IPS race.