The Most Reliable Hardware of 2013 According To Puget Systems

Puget Systems has been keeping track of all sorts of hardware data for the systems they built in 2013. One of the most important things Puget Systems tracks is the failure rate of individual components. Reliability is critical for them as a company, so this data is saved to see if there are failure trends for individual components, product lines, and overall brands. With 2013 coming to a close, Puget Systems was nice enough to share this data with the public! The failure rates observed are very low, but Puget Systems uses parts from big-name hardware makers that are known for their reliability. For example, they saw a zero percent failure rate on Kingston HyperX DDR3 1600MHz 4GB Low Voltage memory kits!

In addition to tracking their own systems, they also compared failure rates for home-built systems against their own builds and found some rather interesting results. Be sure to take a look, as they show that an Intel/NVIDIA system has a lower chance of a hardware failure than an all-AMD system!



“What this shows is that if you built an Intel/NVIDIA GeForce system yourself, based on past failure rates you have about a 1 in 7 chance of there being some sort of hardware problem. But if you purchase the exact same system from Puget Systems, this risk goes down to a 1 in 30 chance since we catch the majority of the hardware problems before you would even see the machine. Similarly, if you build an AMD/AMD Radeon system yourself, you have a 1 in 5 chance of having a hardware problem versus a 1 in 27 chance if you purchase the exact same system from Puget Systems. In short, our data indicates that you are approximately 4-5 times more likely to encounter a hardware problem when building a computer yourself than when purchasing a complete computer from Puget Systems.” – Puget Systems
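The "approximately 4-5 times" claim follows directly from the odds quoted above; a quick sketch using only the numbers from the quote (not any additional data) confirms the arithmetic:

```python
# Relative risk of a hardware problem, DIY build vs. buying from Puget Systems,
# using the chances quoted in the article.
diy_intel, puget_intel = 1 / 7, 1 / 30   # Intel/NVIDIA system
diy_amd,   puget_amd   = 1 / 5, 1 / 27   # AMD/Radeon system

print(diy_intel / puget_intel)  # ~4.3x more likely to hit a problem when DIY
print(diy_amd / puget_amd)      # ~5.4x for the AMD build
```

Both ratios land in the quoted "4-5 times" range (the AMD figure slightly above it).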


    In more than 10 years of building my own PCs, just one PSU has failed. Just do what these companies do: a good PSU and a mobo with a 6-phase (or more, of course) power supply for the CPU, and always check the QVL lists.

  • praack

    Part of the reasons include: the controlled environment for the build, the experience and training of the staff, the QA of the supplies from the vendor, and of course the QA/QC steps before sending the PC to the customer.

    All of these are not followed by, or usually available to, the home builder. Most are like myself: the build happens on the kitchen table, mug of coffee nearby, burn-in (what’s that?), bingo, machine done in a day (guess it works, it only blue screens once in a while)… 😉

    Get the picture… so yeah, buying will always be more reliable, but in the end it is not as much fun.

  • David Gilmore

    No surprises here – my overall impressions from reading hundreds of reviews back up these figures as far as the red-green team comparison, and personally I’ve had much better luck with Intel/nVidia systems. It really just boils down to “you get what you pay for”. As for Puget systems being much less prone to failure than home built systems, that also makes sense – they’re professionals who do this all day, whereas the average home builder has only done it once or twice, or once every couple of years.

  • John Conatser

    Oh guys, it’s so simple: Fords break just because they are Fords.

  • Refillable

    AMD/Nvidia will have the lowest probability of failing

    • KILLPC

      Not necessarily!

  • Paul Margettas

    Can’t say I’m surprised. These companies probably pay for extra quality control.

  • anon

    Since when is the combined probability of failure the sum of the probabilities of its components? Then if something has 3 parts, each with a 50% failure rate, it has a 150% failure rate? DA FUCK STATISTICS.

    • basroil

      It does mean that the chance of a failure involving two or more parts is very high, and in computing the chance of multiple failures is real but limited depending on what fails (a PSU failing may break other parts; memory, not so much). It is certainly not simple addition, but for small numbers, where the probability of multiple simultaneous failures is small, it’s close enough.
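      Assuming independent component failures (a simplification the thread itself notes is imperfect, since a failing PSU can take out other parts), the exact combined rate is 1 minus the product of each part's survival probability. A quick sketch with made-up rates shows why summing is a fine approximation for small numbers but absurd for large ones:

      ```python
      def combined_failure_prob(rates):
          """Exact chance that at least one part fails, assuming independence."""
          p_all_ok = 1.0
          for p in rates:
              p_all_ok *= (1.0 - p)
          return 1.0 - p_all_ok

      small = [0.01, 0.02, 0.015]          # hypothetical per-part rates
      print(combined_failure_prob(small))  # ~0.0444, vs. naive sum 0.045

      big = [0.5, 0.5, 0.5]                # the 3-parts-at-50% case from above
      print(combined_failure_prob(big))    # 0.875, not the nonsensical 150%
      ```

      The naive sum overstates the true rate by exactly the double-counted overlap terms, which are negligible when every rate is a few percent.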

      • anon

        At small numbers it’s usually a quality issue, so “small” means about 2.5% total. I would say, actually, that small numbers are 6-sigma. For combined rates you’ve got to perform ANOVA, assuming they all follow normal laws.

        More than that, how are these failures weighted? And are they talking about DOA, or the first 10h or 100h? Do the DIY numbers include the courier (which is usually the biggest contributor to DIY DOA components)?
        Don’t get me wrong, but saying 1/5 AMD, 1/5 Intel for DIY is high enough to alarm anyone, even without more data.

        • KILLPC

          I think the purpose of this article is to advertise Puget Systems. I am not sure 😛