Technodyna’s Case Study of A16S3-PS with SW16-G3 — Part 1 of 2 / The Data Centers under the Coronavirus Pandemic

Technodyna, a media technology solutions provider based in Egypt, provides workflow design, implementation, and consultancy services to broadcasters, post houses, and media archives operating in the MENA region. The engineering team has vast experience in both the technical and operational aspects of media and film workflows.

Serving a cost-sensitive yet extremely demanding market, Technodyna had an urgent need for a shared storage solution capable of delivering the cutting-edge performance demanded by its high-profile clients without breaking their budgets.

“After performing intensive market research, we chose Accusys as the main storage provider for our market's needs. Accusys products are very capable of delivering impressive performance with minimal hardware and at a very reasonable price. Even the older-model Accusys storage systems we supplied to some of our clients five years ago are still meeting those clients' needs today. The systems run native PCI Express over 20Gbps QSFP cable, performing at more than double the 10 GigE speeds currently sold on the market. We all know how picky and cutting-edge Apple Inc. is, and the fact that Accusys was chosen by them to manufacture the famous Xserve RAID storage confirmed we had made the right decision and lent very high credibility to us and to our clients,” said Engineer Ahmed Madkour, Technodyna’s CEO.

On their latest deal, a new film restoration center had an operational requirement of 6 concurrent 4K DPX streams. A single 4K DPX stream is 1250 MB/s; multiply by 8 to convert to megabits, then by the required 6 streams, and you end up with a huge 60,000 Mb/s SUSTAINED performance requirement. The client’s tender specified 8000 MB/s (64,000 Mb/s). Not only was such great performance required; the tender also specified that each of the 3 workstations needed 2 concurrent real-time streams of the specified data rate, so that real-time rendering tasks could be performed with no delay. During a real-time render, a single workstation must read from and write to the storage simultaneously in real time, which translates into a dual-stream performance requirement per client. You might achieve the overall storage throughput by providing one real-time stream to each of six clients, for example, but you cannot render in real time on a single client directly to the storage unless that client’s connectivity can accommodate both the read and write streams. Likewise, during editing, timeline transitions, dissolves, and picture-in-picture effects all have a similar requirement: reading 2 streams simultaneously for real-time performance.
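The bandwidth arithmetic above can be sketched in a few lines. This is a minimal illustration; the per-stream rate, stream count, and tender figure are the numbers quoted in the text:

```python
# Bandwidth arithmetic from the tender (figures as quoted in the article).
# MB/s here means decimal megabytes per second; 1 byte = 8 bits.

STREAM_MB_PER_S = 1250   # one 4K DPX stream, MB/s
STREAMS = 6              # concurrent streams required overall

per_stream_mbit = STREAM_MB_PER_S * 8        # 10,000 Mb/s per stream
total_mbit = per_stream_mbit * STREAMS       # sustained requirement

TENDER_MB_PER_S = 8000                       # tender spec, MB/s
tender_mbit = TENDER_MB_PER_S * 8            # tender spec, Mb/s

print(total_mbit)   # 60000 Mb/s needed by the workflow
print(tender_mbit)  # 64000 Mb/s specified in the tender
```

Note the tender's 8000 MB/s figure includes a small margin above the raw 6-stream requirement, which helps absorb the dual-stream read/write load on a single workstation during real-time renders.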

Around US $700,000 of the US $1,100,000 total project budget was directed to buying a film scanner with a wet gate. The remaining sum was to cover a 500 TB storage system capable of providing the aforementioned gigantic performance, along with a Phoenix film restoration system from Digital Vision, a second restoration system, PFClean, from The Pixel Farm, and a DaVinci Resolve color grading suite from Blackmagic Design, all running on 3 of ‘the world’s most powerful workstation’, as HP calls it, the HP Z8, loaded with top-notch NVIDIA Quadro cards. Add to all that a Mac Pro-based Avid Pro Tools audio restoration system, a Lipsner-Smith CF-9200 ultrasonic film cleaner from Media Migration Technologies, and all the other components of the workflow. Most competitors claimed this was impossible to achieve and insisted that either the client would have to increase the budget or the requirements would have to be lowered.

While competitors struggled to meet the requirements within the provided budget, they either squeezed their margins because a huge bulk of disks was needed to guarantee the IOPS in a NAS configuration, or switched to a SAN for faster connectivity, which spiked their offerings way beyond budget. What did Technodyna come up with to resolve this dilemma? Please see the next issue of our newsletter for details.

The Data Centers under the Coronavirus Pandemic

“Stay at Home Economy” Causes Explosive Growth of Always-On Data Storage

With the outbreak of the coronavirus (COVID-19) pandemic, the whole world has been thrown into emergency response; many cities have been sealed off, with restricted access to stop the spread of the virus. Front-line medical staff have been called into action, treating patients and rescuing the dying, while governments have intervened to guarantee medical supplies and steady the markets.

Behind the scenes, government and enterprise data centers have played important roles in powering communications among panicked populations trying to protect themselves and provide for their families. Perhaps more than ever before, we can see the importance of always-on, high-availability, highly reliable, and scalable IT infrastructure for delivering critical information services in times of crisis like the coronavirus pandemic.

VersaRAID, a VersaPLX-enabled, scale-out, high-availability storage solution from Loxoll, is a ready-made answer to these demands. An ARM-based Server SAN solution, it offers outstanding scalability for storing and protecting business-critical data assets.


The Importance of Data Availability and Ability to Scale in the Digital Age

As demonstrated by the viral spread of COVID-19 around the world, online data needs, information processing loads, and storage demand mirror the exponential growth of the virus itself. Effective response to this growth takes not only operational agility but also forward-looking planning. As the pandemic has shown, the risk of system crashes due to excessive traffic loads in data centers has increased. System crashes not only degrade the customer experience but, if mishandled, may also cause major data loss, resulting in sometimes irreparable economic damage.

Guaranteeing high availability in data centers is therefore of the utmost priority. On one hand, such situations represent a “trial by fire”; on the other, in the best sense, they create an opportunity for data centers to improve their operating capabilities and efficiency. Situations like today’s make it clear why many large Internet enterprises have recently been expanding their storage capacity. Compared with traditional centralized storage, scale-out, distributed storage has become the preferable choice for enterprises.

Under the coronavirus pandemic, the operation and maintenance of data centers is key to delivering non-stop service availability, particularly when on-site access by operations and maintenance staff has been limited by travel restrictions. That is one reason to consider transitioning to unattended data center operation; the trend toward automated, intelligent management of unattended data centers is unmistakable.

Our Storage Solution that Offers Data Availability and Scalability

One clear answer is VersaRAID, the VersaPLX-enabled, scale-out PCIe storage solution.

VersaPLX is a purpose-built, ARM-based, generic Server SAN appliance that transparently provides data services across any mix of host and data storage systems. Multiple VersaPLX devices can be clustered together to form networked storage, i.e. a Server SAN, for performance scaling and high availability. VersaPLX is based on Ubuntu Linux, carries extra computing capacity, and can easily run open-source software to add features. Customers may use remote-control tools together with VersaPLX’s GUI management software, VersaTEL, to achieve remote, intelligent management, minimizing human intervention and operational errors.

The VersaRAID architecture consists of a Server SAN appliance with an external PCIe connection (VersaPLX-PCI) and PCIe storage (ExaSAN A16S3-PS). The basic configuration is one VersaPLX-PCI plus one A16S3-PS, with a maximum capacity of 256 TB (using multiple 16 TB hard disks). This can be extended to a single VersaPLX with four A16S3-PS units, each connected to up to three JBODs (A16S3-SJ), for a maximum capacity of 4 PB (4096 TB) in total. By clustering multiple VersaPLX appliances, the overall storage capacity can be increased dynamically based on actual needs, all the way to effectively unlimited expansion.
