Data Transfer Speed Hardware Config

Alan asked:

First off, I am sorry if this gets a little scattered; it encompasses a large problem I have been dealing with for some time.

Here is a little background: I operate an educational website with video on demand for online classes. For four years we hosted all our own media, using six dedicated Wowza servers around the world to deliver it. Our cost for that was around $1,500/month.

In an effort to improve service, we enlisted a CDN, which has been working well, but our hosting costs are now up near $6,000/month. We would like to go back to hosting 98% ourselves and outsource to the CDN only as a last resort, via a rollover script.

When we had all our own dedicated servers, we would typically use a dual quad-core 2.66GHz machine with 16GB RAM and two SSDs in RAID 0. Even though we were ordering the same thing from the same hosting company, just in different parts of the world, we would notice large performance differences that seemed to be at the hardware level, never the network level.

We struck a deal to do some colocating with them at a great price, but now I am stuck trying to determine how to get the best performance for what I need.

================================

Now for my question

Say I want the best data transfer and seek times to deliver the highest number of simultaneous videos. If I have 1,000 users online at a time, they may have up to 250-400 individual video files open at the same time. I realize I can get eight SAS SSDs and put them in a RAID, but what about the processor, or the RAM?

Looking on eBay, I see stuff like:

PowerEdge R810 1U Server (4X) 1.87GHz Eight-Core Xeon L7555 192GB RAM
POWEREDGE R810 SERVER FOUR X7550 2.0GHZ 96GB

From a processor perspective, I can find ones with 8MB-30MB of L3 cache, but does it really matter for this? Am I better off with two quad-core processors, or do I need four eight-cores to really get the most out of this?

I understand from the software vendor that more RAM is better when you have multiple files open, but beyond that, they decline to give details about what type of hardware will actually produce a specific result. All they say is:

http://www.wowza.com/products/streaming-engine/specifications
High-load recommended production hardware
CPU: Dual Quad-Core or a single Hex-Core, 3.00 GHz or better
RAM: 16-32GB
Disk: 2 or more in RAID 0 (striping)
Network: 10Gbps Ethernet

Which is great, but it doesn’t say what the result would be with this config. On any given day I need to be able to stream 15,000 videos over the course of the full day, with a peak of 1,500 simultaneous streams. From a bandwidth perspective, I could achieve that with a single internet connection, if the hardware could keep up. I know there are benefits to having multiple locations, but I could still save over $50,000/year if I can just figure out the hardware side.
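
To put a rough number on the peak, assuming an average bitrate of about 2 Mbit/s per stream (our actual encodes vary, so this is only a sketch):

```sh
# Back-of-the-envelope peak bandwidth; the 2 Mbit/s average bitrate is an assumption
streams=1500
bitrate_mbps=2
echo "$(( streams * bitrate_mbps )) Mbit/s peak"   # 3000 Mbit/s, roughly 3 Gbit/s
```

So the 10Gbps port the vendor suggests would leave plenty of headroom.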

In the end, I guess I am wondering: if I did have a super-high-performance RAID, what is the order of things I need to worry about next? Should I focus on clock speed, L3 cache, or RAM?

=================================

Update:

I purchased a new server and am learning more about RAID configurations and performance; if you want to follow along, you can see the next part of the saga here:

New RAID hdparm slow

My answer:

I don’t think it’s possible to say: Buy X, install Y, and your problem will be instantly solved. This is going to take several iterations to get through, and you’ll need to put more work into identifying the bottlenecks when they inevitably appear. For the most part I’m going to avoid recommending specific hardware, as that would be out of date by the time I click the submit button.

So, since we don’t have good data on what the bottlenecks were, let’s just pretend this is a greenfield project. This is how I would approach it:

  1. Video streaming doesn’t require that much RAM or CPU, but it does require fast storage. Let’s say four 400GB SAS SSDs in RAID 10, for 800GB usable space, to store your videos. You may want to increase that, though, if you expect to have a lot more videos to serve in the next few years: say, four 800GB SAS SSDs in RAID 10, for 1600GB usable space. (A minimal mdadm sketch appears after this list.)
  2. Don’t cheap out on the NIC. Your NIC should support, at minimum, TCP/IP offload (even if you end up not using it), receive side scaling, and receive segment coalescing. Some NICs, such as those from Intel, have further features to improve performance, so it’s worth spending a little time researching this. (An ethtool sketch for inspecting these features follows the list.)
  3. Though you do need to pay attention and make sure your server’s networking is configured and tuned well, network throughput problems are sometimes the fault of the network infrastructure. If you’re colocated and don’t control this, be prepared to argue a lot with your datacenter. In particular, make sure that you aren’t buying a network port which is throttled to some fraction of the link speed, and that the datacenter actually has more than enough bandwidth to accommodate you at peak times.
  4. The CPU doesn’t matter a lot, but it does matter. The web server itself won’t use much CPU, but processing interrupts from the NIC may use as much CPU as the web server does, and quite possibly more. You probably don’t need something top of the line, but you shouldn’t cheap out here either. (A quick way to see what interrupt handling costs you is sketched after this list.)
  5. You will have additional tuning work to do if you use a dual-CPU system. In such a system, each CPU can access half the RAM quickly and the other half more slowly. This is called NUMA, and you will need to watch out for bottlenecks caused by your web server or interrupt handling running on one CPU while accessing memory attached to the other. Linux includes tools to help you deal with this (see the numactl sketch after this list).
  6. You don’t need that much RAM to serve videos, but your server will make use of all the RAM you can give it, as a very fast disk cache. You aren’t likely to hit a bottleneck here, I think, so I’d start small and upgrade the RAM if necessary: 32GB might make a good start, while 192GB would be overkill unless you already knew you needed it. (The free/vmstat note after this list shows where to watch the cache.)
  7. I would use the nginx web server; I always do, since out of the box it handles thousands of simultaneous connections far better than Apache. You may need to increase the number of file descriptors available on the system, though. (A minimal config sketch follows the list.)
  8. I would build this on Red Hat Enterprise Linux 7, and purchase and keep the subscription active for the life of the server. Besides Red Hat’s comprehensive documentation on its distribution and on performance tuning, and its extensive knowledge base, Red Hat Support can help you identify and resolve bottlenecks when they appear, which is easily worth the price of admission.
  9. Be prepared to upgrade components if circumstances warrant. You might want to upgrade the NIC, RAM or the CPU based on actual issues discovered after you go into production.
  10. All of this assumes you are building multiple servers, with requests load balanced between them somehow. Build the servers with more capacity than they need, so that if one fails or has to be rebooted or upgraded, the remainder can take up the slack. (A simple load-balancing sketch closes out the examples below.)
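
For item 1, here is a minimal sketch of assembling four SSDs into a software RAID 10 with mdadm. The device names (/dev/sdb through /dev/sde), filesystem, and mount point are assumptions; a hardware RAID controller would make this step look entirely different.

```sh
# Sketch: four-disk software RAID 10; device names are assumed
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
mkfs.xfs /dev/md0              # XFS is a reasonable choice for large media files
mkdir -p /srv/videos
mount /dev/md0 /srv/videos     # assumed mount point for the video library
cat /proc/mdstat               # confirm the array is up (it resyncs in the background)
```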
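
For item 2, ethtool will show which of those features the NIC driver actually exposes. The interface name eth0 is an assumption; substitute your own.

```sh
ethtool -k eth0          # list offload features; anything marked [fixed] can't be changed
ethtool -l eth0          # receive side scaling: how many RX queues does the NIC expose?
ethtool -K eth0 gro on   # example toggle (generic receive offload) while benchmarking
```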
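
For item 4, a quick way to see what interrupt handling is costing you is to check which CPUs field the NIC’s interrupts and watch the per-CPU breakdown under load (mpstat is part of the sysstat package; eth0 is again an assumed interface name):

```sh
grep eth0 /proc/interrupts   # which CPUs are fielding the NIC's interrupts?
mpstat -P ALL 5              # watch the %irq and %soft columns while serving traffic
```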
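
For item 5, the numactl package provides the basic NUMA tooling. The pinning line is only a sketch; measure first, since confining everything to one node can easily make things worse.

```sh
numactl --hardware   # show NUMA nodes with their CPUs and memory
numastat             # numa_miss and numa_foreign climbing under load is the red flag

# Sketch: run the web server with CPU and memory confined to node 0
numactl --cpunodebind=0 --membind=0 /usr/sbin/nginx
```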
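
For item 6, you can watch the kernel put that extra RAM to work as page cache:

```sh
free -h     # the buff/cache figure is RAM acting as your "free" disk cache
vmstat 5    # if bi (blocks read in) falls as the same videos replay, the cache is working
```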
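
For item 7, a minimal sketch of raising the file descriptor limits and serving the videos as static files from nginx. All paths and numbers here are placeholder assumptions to tune, and the mp4 directive requires nginx built with the mp4 module.

```sh
sysctl -w fs.file-max=500000       # system-wide cap; the value is a placeholder
# Per-user limits go in /etc/security/limits.conf (or LimitNOFILE in a systemd unit):
#   nginx  soft  nofile  65536
#   nginx  hard  nofile  65536
# Also raise worker_rlimit_nofile and worker_connections in nginx.conf itself.

cat > /etc/nginx/conf.d/videos.conf <<'EOF'
server {
    listen 80;
    root /srv/videos;          # assumed location of the video library
    location / {
        mp4;                   # pseudo-streaming for .mp4 (needs the mp4 module)
        sendfile on;           # let the kernel push file data straight to the socket
        tcp_nopush on;
    }
}
EOF
nginx -t && systemctl reload nginx   # validate the config, then reload
```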
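
And for item 10, one way to spread requests, sketched here with nginx and example addresses, is a simple reverse-proxy upstream. Note that proxying funnels all the video traffic through the front end, so at this bandwidth a DNS-based scheme or a dedicated load balancer may fit better; this only illustrates the shape of it.

```sh
cat > /etc/nginx/conf.d/lb.conf <<'EOF'
upstream video_backends {
    server 192.0.2.10;   # video server 1 (example address)
    server 192.0.2.11;   # video server 2 (example address)
}
server {
    listen 80;
    location / {
        proxy_pass http://video_backends;   # round-robin by default
    }
}
EOF
```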

View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.