By Amy Fraley, John Loughmiller, and Robert Drake
Video is everywhere. Using it to enhance an architectural design can make any project come to life. Video integration can be complicated, however, and with so many different video formats and new technologies it can be a little overwhelming. This article looks at the fundamentals of working with mixed video formats. It also examines some new technologies that can make video installation an architect's friend instead of enemy.
What is the Real Difference?
Several challenges must be addressed when retrofitting existing structures. An interviewing process that reveals what real-time and legacy (archived) video content is available can determine what types of video should be distributed within a building. Equally important is discovering what plans are in place to change formats and capabilities going forward. To fully understand the answers, it's useful to know that in the United States there are three basic types of video signals: analog, component, and digital.
With the exception of multinational companies that may need to display video from other countries, the analog video system or its component video sibling will be the source of virtually all legacy video material at a client's location. Analog video uses 525 interlaced horizontal lines to make a single frame and transmits 30 frames per second to create moving pictures. A television frame is made up of two fields of 262.5 lines each: one field contains the odd-numbered lines (of the 525) and the other contains the even-numbered lines. The video and color information in the analog signal (analog = infinite voltage levels within predefined limits) are combined into a single composite signal distributed via coaxial cable.
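The interlacing described above can be sketched in a few lines of code; the six-line "frame" here is a toy stand-in for the 525 real scan lines.

```python
# Toy illustration of interlacing: a frame is split into two fields,
# one holding the odd-numbered scan lines and one the even-numbered.
# (A real NTSC frame has 525 lines; six stand in for them here.)
frame = ["line1", "line2", "line3", "line4", "line5", "line6"]

odd_field = frame[0::2]    # lines 1, 3, 5 ...
even_field = frame[1::2]   # lines 2, 4, 6 ...

print(odd_field)   # ['line1', 'line3', 'line5']
print(even_field)  # ['line2', 'line4', 'line6']
```

The display draws one field, then the other, 1/60 of a second apart, and the eye fuses them into a single 1/30-second frame.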
Component video separates the luminance portion of the analog signal from the color portion, allowing higher resolutions and better signal-to-noise ratios. The most common approaches are YUV (which treats the combined color information as one entity and the luminance as another), color-difference component video (Y, R-Y, B-Y, produced by subtracting the luminance from the red and blue signals), and RGB (which carries separate red, green, and blue signals that together convey the luminance and can combine to create any reproducible color). For an architect, things are a little stickier with component video than with pure analog video because special cables may be needed: component video uses three RG-11 cables instead of the single cable needed for a regular analog video feed. Although it's possible to distribute high definition digital television via component video using a conversion process, the resolution will never equal that of a pure digital path. It's a trap that awaits the unwary and a recipe for an unhappy client.
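The Y, R-Y, B-Y relationship can be shown with the standard-definition luma weights defined in ITU-R BT.601; the function name and the normalized 0-to-1 values are illustrative, not taken from any particular product.

```python
def rgb_to_ydiff(r, g, b):
    """Convert normalized RGB (0.0-1.0) to Y, R-Y, B-Y using the
    standard-definition (ITU-R BT.601) luma weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance
    return y, r - y, b - y                 # Y, R-Y, B-Y

# Pure white: full luminance, and both color-difference
# signals come out (essentially) zero -- white carries no color.
y, ry, by = rgb_to_ydiff(1.0, 1.0, 1.0)
```

This is why the green signal never needs its own wire: given Y, R-Y, and B-Y, the green contribution can be recovered arithmetically at the display.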
Both standard digital video and high definition digital video begin by sampling a very high resolution analog signal and assigning a binary code to each sample. The granularity of the sample (the number of bits used in the encoding) and frequency of the sampling (how often a sample of the analog signal is taken) are two things that make standard definition digital video different from high definition digital video (along with the degree of signal compression, a subject beyond this simple tutorial).
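As a concrete example of how sample rate and bit depth set the data rate, here is the arithmetic for standard-definition digital video as carried on SDI (ITU-R BT.601 sampling, 4:2:2, 10 bits per sample):

```python
# Standard-definition digital video (ITU-R BT.601, 4:2:2, 10-bit):
luma_rate = 13.5e6        # luma samples per second
chroma_rate = 2 * 6.75e6  # two color-difference signals at half rate
bits_per_sample = 10

bitrate = (luma_rate + chroma_rate) * bits_per_sample
print(bitrate / 1e6, "Mbit/s")  # 270.0 Mbit/s -- the standard SDI rate
```

More bits per sample or more samples per second means more data; that, plus compression, is the whole difference between standard and high definition on the wire.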
Digital video may or may not be high definition, but HDTV is virtually always digitally distributed in commercial and high-end residential environments. Although the basics are similar, HDTV requires significantly more bandwidth than standard definition video; the system designs resemble those employed when designing a building with a high-capacity local area network infrastructure. Just like high-capacity computer networks, digital television requires enormous bandwidth and therefore special cables and techniques. SDI (Serial Digital Interface) and HD-SDI (High Definition Serial Digital Interface) signals require only an RG-11 cable to distribute if the runs are kept short; segment amplifiers are used when longer runs are needed. Other popular digital formats are Digital Visual Interface (DVI) and High Definition Multimedia Interface (HDMI™).
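The same arithmetic shows why HD needs so much more bandwidth: HD-SDI samples luma at 74.25 MHz (per the SMPTE 292M/274M standards), which works out to a serial rate of 1.485 Gbit/s, five and a half times the standard-definition SDI rate.

```python
# HD-SDI (1080-line video, 4:2:2, 10-bit, SMPTE 292M):
hd_luma_rate = 74.25e6         # luma samples per second
hd_chroma_rate = 2 * 37.125e6  # two color-difference signals at half rate
bits_per_sample = 10

hd_bitrate = (hd_luma_rate + hd_chroma_rate) * bits_per_sample
print(hd_bitrate / 1e9, "Gbit/s")              # 1.485 Gbit/s
print(hd_bitrate / 270e6, "x the SD-SDI rate")  # 5.5 x the SD-SDI rate
```

That factor of 5.5 is why cable quality, shielding, and run length matter so much more in an HD installation.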
There are several important factors to keep in mind when considering HDMI or DVI as a cable choice. Maximum cable runs may be affected by all elements of the system, including the quality of the cable, the receiver chip inside the display or projector, and the source output. High-quality cables help prevent signal degradation and are always recommended. Equalization circuitry in a display device will help compensate for a weaker signal and allow a slightly longer cable run. If a longer run is needed, there are viable options. One is an HDMI or DVI extender, which reconstitutes the HDMI or DVI signal at the end of a long cable run, just before the input to the display. Always check the specifications for all HDMI or DVI items because they differ with each manufacturer. It is also important when specifying HDMI products to use only products that have been fully tested by one of the HDMI authorized test centers. Signals using HDCP (High-bandwidth Digital Content Protection) require equipment that's certified to industry standards for proper operation. For more information and details on HDMI and DVI please visit www.hdmi.org.
This edge blending display in a movie theater is accomplished with two projectors and a video processor. COURTESY OF TECHNOMEDIA SOLUTIONS LLC
The vertical resolution buzzwords used in HDTV are 720p, 1080i, and, for the best resolution, 1080p. The "p" means the video's horizontal lines are presented progressively from the top to the bottom of the display; the "i" means the picture is broken into two fields when distributed and interlaced back together when displayed. For the architect, the higher the resolution, the better the equipment specifications must be, including very low cable losses, double the normal shield coverage, and as flat a frequency response as possible out to several hundred MHz.
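To put those buzzwords in pixel terms (the frame sizes are the standard ones: 1280×720 for 720p, 1920×1080 for both 1080 formats):

```python
# Active-picture sizes for the common HDTV formats.
formats = {
    "720p":  (1280, 720),
    "1080i": (1920, 1080),
    "1080p": (1920, 1080),
}
for name, (w, h) in formats.items():
    print(f"{name}: {w * h:,} pixels per frame")
# 720p delivers 921,600 pixels per frame; the 1080 formats 2,073,600.
# 1080i sends each frame as two half-height fields, so at the same
# frame rate it needs about half the instantaneous pixel rate of 1080p.
```

The jump from 720p to 1080p more than doubles the pixel count, which is why 1080p demands the most from cables and equipment.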
When planning cable routes, never place video cables in the same conduit with power cables and always cross all power cables at a 90-degree angle, staying as far away as possible from the power cables. If RF distribution is planned (many video signals encoded as television channels and distributed on a single coaxial cable), treat those cables as if they were both power and video because in some respects they are. Keep them away from both true video and true power cables to avoid interference and cross-talk problems. This won't be a huge issue in new construction but may be an issue in retrofit buildings, which brings us to our next subject of how to avoid some possible snags by using wireless video technology.
Here's how a typical high definition wireless AV system would work and the equipment needed. COURTESY OF AVOCENT
Don't Touch a Thing
Wireless video technology is quickly gaining a foothold with systems integrators and can provide a seamless way to build a digital video network in a space or structure where cabling would otherwise be prohibitive. A wireless system consists of a transmitter and receivers that work in unison to form a managed audio-video extension network, able to deliver a synchronized stream of high definition computer graphics or video and associated audio from a source to as many as eight display devices in a wired or wireless manner. Display device control data, content protection, and interactive device control signals, including IR and serial, are passed from source to sink through the extension network, providing a fully managed solution.
Some of the newest wireless systems are designed for high definition media support used in professional AV applications and will support HDMI video and embedded audio, digital and analog computer graphics with analog audio, and component video with analog audio. Some receivers even offer coaxial and optical audio.
Depending on the size of the project, there are two ways to run a wireless system. The first is keeping it completely wireless. There are a few distance limitations with this method; for example, one wireless manufacturer can transmit 720p or 1080i high definition signals through walls up to a distance of 150 feet, or up to 1,000 feet with a line of sight. Installations that need to span longer distances will need a wired portion, with wireless beyond that point. Interference with other wireless devices is avoided by operating on frequencies outside those used by the local area network (LAN).
Integrating a Dazzling Larger-than-life Display
With clients wanting larger-than-life video displays, which can become very expensive, it's important to offer creative ideas and different technologies. One such technology is edge blending, a method used to create a wider, taller, or simply larger video display by using two or more video/data projectors to create a single image. Each projector outputs a portion of the desired image and overlaps with the outputs from the other projectors; an edge-blending process then blends the image edges together to create one clear, bright, seamless image.
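A minimal sketch of the blending math, with hypothetical numbers: across the overlap zone, each projector's output is ramped so that the two contributions always sum to full brightness.

```python
overlap = 5  # width of the overlap zone in pixels (hypothetical)

# Left projector fades out across the overlap; right projector fades in.
left_ramp = [1 - (i + 0.5) / overlap for i in range(overlap)]
right_ramp = [(i + 0.5) / overlap for i in range(overlap)]

# At every pixel of the overlap the combined brightness stays constant,
# avoiding the bright band that a plain, unblended overlap produces.
combined = [l + r for l, r in zip(left_ramp, right_ramp)]
```

Real edge-blending processors typically use an S-shaped (for example, raised-cosine) ramp rather than a straight line and compensate for projector gamma, but the principle is the same.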
This process has been used for many years with displays often being seen in staging and live-performance venues. It is becoming more popular across a range of industries and markets such as churches, government, and even in places such as galleries, art centers, museums, retail, and educational buildings. Edge-blending techniques offer the ability to make large and clear displays simply and efficiently, with little hardware and space. They offer freedom to create anything without size or shape limitations. Three-dimensional edge blending can provide a full 360-degree blend that envelops a person completely and creates a full and lifelike illusion of standing anywhere in the world or even the universe.
Although edge blending is not the only way to obtain a bigger video display, it is generally considered one of the most efficient and cost-effective. Larger LCD screens can create a larger image, but monitor sizes are limited and, more often than not, several monitors are required to reach the desired image size, resulting in an array of heavy, expensive LCD screens mounted together. Even large LED walls are not always the best solution, as they too can be very expensive, bulky, and heavy.
When edge blending is the desired choice for a large display, there are several things to keep in mind. It is critical that the projectors are stable relative to the surrounding structure and to each other. This can sometimes require assembling complex rigging to house all the necessary projectors, but it is extremely important that it is done correctly. If projectors are mounted independently of each other, one projector's movement can disrupt the entire blended image.
Used worldwide, edge blending is a cost-effective, efficient method of producing innovative, stunning, large displays with clear and seamless images. The displays can be even more intriguing depending on the screen used. Almost anything can be a screen, including glass. This opens up the use of projectors in places that have ambient lighting issues. New technology is increasing the options for large video applications.
Above: Aligning two images to overlap without edge blending produces a bright area where they merge.
Left: Edge blending gradually reduces the brightness of the projected image edge.
As the world becomes more visually stimulated, clients will want to follow suit with their buildings. Architects can stay ahead of the game by knowing the basics about technology and staying abreast of new trends. An AV installer should be involved with planning from the start to give recommendations and advice and help the project run smoothly. The backgrounds of the installers should be checked because many specialize in certain areas. To find a reputable AV installer, contact the National Systems Contractors Association (www.nsca.org) or InfoComm International (www.infocomm.org).
Amy Fraley is the marketing manager for TV One and deals with video systems integrators throughout the United States, staying on top of the newest trends and innovative ways video products are being used. She has worked on more than 30 application stories in 2 years at TV One. John Loughmiller
(firstname.lastname@example.org) is a communications engineer, author, and owner of Technical Support Group, a company specializing in product technical evaluations, preparation of technical documentation, and case studies. He recently retired as TV One's engineering manager.
Robert Drake (email@example.com) joined TV One in 2002 and was a member of the development team for the early CORIO2 products and Windows software. In 2006 he became the technical research engineer, developing ideas for new products and new product features.