The security industry has adopted a centralised storage approach for IP video surveillance systems. Manufacturers followed traditional data centre designs, assuming that the standard information technology (IT) architecture would serve best.
Many of the growing issues with IP video systems today, such as bandwidth, storage and maintenance costs, are the direct result of this centralised storage architecture.
Here is the problem: data centres and most IT systems are designed for many users accessing data servers in one location. This is called a ‘one-to-many’ model, since each data centre serves large numbers of users. Most experts assumed this would be the best model for security video as well, but it is not: sensor networks have exactly the opposite requirements.
Video surveillance systems include dozens or hundreds, and sometimes thousands of cameras, all spread out across the network, with only a few users. Cameras supplying the data far outnumber the users, and there is no way to centrally locate those cameras. They need to be out at the edge of the network.
This ‘many-to-one’ architecture places very different demands on the system. For example, typical enterprise servers spend about 50% of their time reading data stored on their hard drives, and the other 50% writing new data.1 Surveillance systems, however, must write data non-stop to record the video, roughly 99% of the time, spending only about 1% on playback.2
Second, data centres usually need a small number of storage units to support 20 or more different enterprise functions. So, the same storage servers can be used for a wide range of applications. But surveillance by itself requires extremely large amounts of storage. “The consequence is the video requirements can put a speed bump in IT plans to reduce costs through consolidating storage,” according to Steven Titch, editor of Network Centric Security. “Plus, in video,” according to Lee Caswell of Pivot3, “each time storage capacity increases, there needs to be a commensurate increase in bandwidth. That breaks the mould.”3
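Caswell's point, that every increase in capacity demands a matching increase in write bandwidth, can be sketched with some back-of-the-envelope arithmetic. The camera count, bitrate and retention figures below are illustrative placeholders, not numbers from the article:

```python
def surveillance_sizing(cameras, mbps_per_camera, retention_days):
    """Estimate aggregate write bandwidth and storage for a deployment.

    All inputs are hypothetical; real bitrates vary with codec,
    resolution, frame rate and scene activity.
    """
    write_mbps = cameras * mbps_per_camera           # continuous write load
    gb_per_day = write_mbps * 86400 / 8 / 1000       # Mb/s -> GB per day
    storage_tb = gb_per_day * retention_days / 1000  # GB -> TB retained
    return write_mbps, storage_tb

# e.g. 100 cameras at 4 Mb/s each, retained for 30 days
bw, tb = surveillance_sizing(100, 4.0, 30)  # 400 Mb/s sustained, ~130 TB
```

Doubling retention doubles the storage, but doubling the camera count doubles both storage and the sustained write bandwidth the storage tier must absorb, which is the "speed bump" Titch describes.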
“The increasing reliance on cameras for security presents storage costs that can easily spiral out of control,” according to Titch. This is why server manufacturers have been scrambling to develop better solutions for video surveillance.
The problem with bandwidth is even more severe, since almost any large deployment of cameras will strain available WANs and wireless networks. In addition, remotely accessing video via the Internet is an increasing need. This often forces video storage closer to the edge of the network, into smaller servers placed at local sites. However, even on local area networks, where adding equipment to expand bandwidth is simple, IT managers are still concerned about the demands made by continuously streaming video cameras. They often insist on separate networks for video, to isolate the cameras and protect their enterprise information systems.
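To see why WAN links become the bottleneck that pushes recording toward the edge, consider a simple capacity check. The link speed, per-stream bitrate and utilisation target below are hypothetical:

```python
def cameras_supported(link_mbps, stream_mbps, headroom=0.7):
    """How many continuous camera streams fit on a backhaul link.

    `headroom` caps utilisation (here a placeholder 70%) so the link
    can still absorb bursts and other traffic.
    """
    return int(link_mbps * headroom // stream_mbps)

# e.g. a 100 Mb/s WAN uplink with 4 Mb/s streams
n = cameras_supported(100, 4.0)  # only 17 cameras fit
```

With figures like these, even a modest site exhausts its uplink long before it exhausts its camera budget, which is why storage gets pushed into smaller servers at the local sites.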
Security managers are generally just as concerned, because they cannot afford to lose video recording whenever the data network goes down, whether for maintenance or any other reason. For both of these reasons, added networking costs are often required.
Hard drive failures are by far the number one cause of equipment failure with security video. If you centralise video storage, then a single point of failure puts at risk the data recorded from 16, 32 or more cameras. For this reason, IT managers require RAID storage and sophisticated management systems that can automatically redirect video streams during storage node failures.
However, traditional RAID 5 storage approaches are often inadequate.
“This issue is unique to video recording and seldom surfaces in RAID systems used by other applications,” according to Carl Lindgren of Sycuan Gaming Commission. “The key is that for most applications, written data is verified during the write process.” But the continuous, non-stop nature of video does not allow time to verify. “A drive could happily chug along writing data that is unreadable for a long time. Neither the system nor the operators would ever know that there is a problem,” says Lindgren. If an error occurs on more than one disk, a RAID 5 system cannot recover the lost video. This problem is forcing the move to more expensive RAID architectures.4
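Lindgren's point about unrecoverable errors follows from how RAID 5 parity works. A minimal sketch, using byte-level XOR over whole blocks (a real array stripes data and rotates parity across drives):

```python
from functools import reduce

def parity(blocks):
    """XOR parity across equal-length data blocks, as in RAID 5."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]  # three data "drives"
p = parity(data)                    # one parity "drive"

# One failed block can be rebuilt from the survivors plus parity...
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
# ...but if unreadable sectors lurk on a second disk, the XOR
# equations are underdetermined and the lost video is gone.
```

A single XOR equation can only solve for one unknown per stripe, which is why a silent error on one disk plus a failure on another defeats RAID 5, and why the article points to costlier architectures with more parity.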
On top of all these issues are the skyrocketing costs of installing and maintaining new data centres, along with double-digit growth in the number of servers needed in each data centre and rapid growth in the total number of data centres.5 Before the year 2000, servers consumed about 50 W each; today they draw about 250 W. In addition, according to Intel, another 170 W is now needed to cool each server.6
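Combining the figures above (250 W per server plus 170 W of cooling) gives a feel for the running cost per server. The electricity tariff below is a hypothetical placeholder:

```python
def annual_cost(server_w=250, cooling_w=170, price_per_kwh=0.12):
    """Yearly electricity use and cost of one continuously running server.

    Wattages are the article's figures; the tariff is a placeholder.
    """
    kwh = (server_w + cooling_w) * 24 * 365 / 1000  # W -> kWh per year
    return kwh, kwh * price_per_kwh

kwh, cost = annual_cost()  # 420 W continuous draw, ~3,680 kWh/year
```

Multiply that by double-digit annual growth in server counts and the operating-cost argument against ever-larger central recording farms makes itself.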
Read More: >>> http://securitysa.com/regular.aspx?pklRegularId=4421
Thanks Doug Marman/VIDEOIQ, SMARTIPVIDEO