Category Archives: Media & Events

It’s Time to Optimize Adaptive Streaming

Online and over-the-top (OTT) video viewership is increasing at a tremendous rate. At the same time, audience tolerance for a poor quality of experience is shrinking to nearly nothing. This combination of factors is going to drive significant optimization in adaptive streaming technology in the coming months and years.

Over the last several years, we’ve seen traditional UDP-style streaming technologies fade away in favor of HTTP-based adaptive streaming formats like MPEG-DASH, HLS, HDS, and Smooth Streaming. This shift provides a number of benefits. By using short, aligned video segments encoded at varying quality levels, adaptive video is able to leverage the strengths Content Delivery Networks (CDNs) already have in their caching and edge infrastructure to deliver smooth video over the Internet.

Let’s touch briefly on how this works. In adaptive streaming, performance is regulated through heuristics built into the video player. The player reads a stream manifest, which tells it a bit about the audio and video, the available quality levels, and where to find the content. In fact, the video in an adaptive stream is technically not a stream at all! It actually consists of small, downloadable video segments (2 to 10 seconds in length in most cases) available in several different bitrates. The player starts by downloading and playing back the lowest-quality segments, then analyzes how well it is keeping up. If it’s doing well, it starts requesting higher quality levels, taking a moment at each step to assess performance. This is why videos so often look fuzzy for the first few seconds: the player is starting small until it is satisfied with its own performance, and it keeps stepping up until it reaches the highest quality it can handle. If the network encounters congestion, it steps back down the quality ladder. In short, the player analyzes its own performance throughout the viewing process and decides which quality level to download based on that analysis.
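
To make that heuristic concrete, here is a minimal sketch of a throughput-based quality selector of the kind described above. It is illustrative only: the bitrate ladder, the safety margin, and the function name are hypothetical rather than taken from any particular player.

```python
# Minimal sketch of a throughput-based ABR heuristic (illustrative only).
# The bitrate ladder and safety margin below are hypothetical values.

BITRATE_LADDER_KBPS = [400, 800, 1500, 3000, 5000, 8000]  # lowest to highest
SAFETY_MARGIN = 0.8  # only plan to use ~80% of measured throughput

def choose_next_quality(recent_throughputs_kbps, current_index):
    """Pick the rendition for the next segment based on recent download speeds."""
    if not recent_throughputs_kbps:
        return 0  # start at the lowest quality, as described in the post

    # Be conservative: base the decision on the slowest recent download.
    usable_kbps = min(recent_throughputs_kbps) * SAFETY_MARGIN

    # Find the highest rendition that fits within the usable throughput.
    candidate = 0
    for i, bitrate in enumerate(BITRATE_LADDER_KBPS):
        if bitrate <= usable_kbps:
            candidate = i

    # Step up gradually (one level at a time); step down immediately.
    if candidate > current_index:
        return current_index + 1
    return candidate

# Example: the last three segments downloaded at 2.5, 3.2, and 2.8 Mbps.
print(choose_next_quality([2500, 3200, 2800], current_index=1))  # -> 2
```

Real players layer buffer occupancy, startup rules, and smoothing on top of this, but the cautious step-up and immediate step-down is what produces the fuzzy first few seconds described above.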

This process of negotiating quality based on playback heuristics generally does a good job of managing performance, but there are limits to the method’s effectiveness. What happens when the viewer’s bandwidth, or their system’s ability to play back the content, is not the source of the problem?

To explore this topic, it might be best to look at scenarios that can negatively affect performance. One that I’m very familiar with from my work in the enterprise is related to network capacity. Many enterprises have closed internal networks architected with just enough headroom to support day-to-day business activities like email, document sharing, conference calls, and so on. Video, on the other hand, is a bandwidth hog that quickly drives a network to congestion (an IT executive I once worked with referred to video as the cholesterol of the network). When networks get congested, playback quality suffers for everyone. To solve this problem, a handful of companies have come to the table with Software Defined Networking (SDN) solutions based on peer-to-peer delivery. Hive Technologies, Streamroot, and Kollective are a few of the companies leading with these types of approaches.

In the typical scenario of limited network capacity, performance problems are produced by network bottlenecks: areas of the network where there’s just too much data trying to traverse at the same time. These bottlenecks are usually the result of too many simultaneous viewers pulling streams from the server or CDN, and often occur during live webcasts where large audiences are all viewing at the same time. Even though all viewers are watching the same program, they must each go back to the source to obtain their own stream segments. By contrast, in peer-to-peer scenarios, intelligence built into the delivery solution seamlessly sends viewers to other nearby viewers to obtain segments. This limits the number of viewers going all the way back to the source, thereby eliminating the bottlenecks in the network.
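
As a rough illustration of the idea (not how any of the vendors named above actually implement it), a peer-assisted player might try nearby peers for each segment before falling back to the origin or CDN. The function and cache names here are hypothetical.

```python
# Illustrative sketch of peer-assisted segment fetching (hypothetical API names).

def fetch_segment(segment_id, nearby_peers, fetch_from_peer, fetch_from_origin):
    """Try peers that already hold the segment before hitting the origin/CDN."""
    for peer in nearby_peers:
        data = fetch_from_peer(peer, segment_id)  # returns None on miss/timeout
        if data is not None:
            return data, "peer"
    # Fall back to the source; only a fraction of viewers should reach this line.
    return fetch_from_origin(segment_id), "origin"

# Example wiring with stand-in functions:
peer_cache = {("peer-A", 42): b"...segment bytes..."}
get_peer = lambda peer, seg: peer_cache.get((peer, seg))
get_origin = lambda seg: b"...segment bytes from CDN..."

data, source = fetch_segment(42, ["peer-A", "peer-B"], get_peer, get_origin)
print(source)  # -> "peer"
```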

But what if the problem isn’t the network, but rather an issue with a particular CDN or CDN edge server? This is one of the challenges a company called DLVR is aiming to solve with a trademarked approach they call “Responsive Manifests.” By doing real-time analysis on many different variables (device, network, location, video characteristics, CDN performance, etc.), the platform creates a unique manifest for each viewer, optimized to provide that individual with the best performance possible. Remember, in traditional adaptive streaming, the player obtains the manifest (the blueprint for the stream) and uses it to determine how to play back the content. In this scenario, the manifest is continually rewritten in response to delivery performance. Is a CDN edge server having a moment of trouble? No problem: rewrite the manifest to send the player to a different edge server (or a different CDN altogether) until the problem resolves.
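
To show the general shape of the idea (not DLVR’s actual implementation), here is a toy example that rewrites the segment URLs in an HLS-style media playlist to point at a healthier CDN host. The host names and the health check are made up for illustration.

```python
# Toy manifest rewrite: repoint HLS segment URLs at a healthier CDN host.
# Host names and the health-check logic are hypothetical, for illustration only.

PLAYLIST = """#EXTM3U
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
https://cdn-a.example.com/vod/segment1.ts
#EXTINF:6.0,
https://cdn-a.example.com/vod/segment2.ts
"""

def rewrite_playlist(playlist, bad_host, good_host):
    """Swap the host on every segment URL while leaving HLS tags untouched."""
    lines = []
    for line in playlist.splitlines():
        if line.startswith("http"):
            line = line.replace(bad_host, good_host)
        lines.append(line)
    return "\n".join(lines)

# Pretend a health monitor flagged cdn-a as struggling for this viewer.
print(rewrite_playlist(PLAYLIST, "cdn-a.example.com", "cdn-b.example.com"))
```

A real implementation would regenerate the manifest continuously per viewer; the point here is simply that the manifest, not the player, is where the routing decision lives.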

Optimization isn’t only focused on quality of experience; plenty of attention is centered on efficiency and cost of delivery as well. For example, Telestream recently announced adaptive bitrate optimization capabilities in their Vantage media processing product. It aims to determine where quality improvements actually fail to be perceptible, and then simply writes those video segments out of the manifest. For example, if a video is encoded at a top bandwidth of 8 megabits per second, but a particular scene is very simple and the 8 megabit chunks offer no perceivable benefit over the 1.5 megabit chunks, it will rewrite the manifest to make the 1.5 megabit chunk the highest available quality for as long as that is the case. This means audiences won’t be downloading high-bitrate chunks when lower-bitrate media can do the same job, thereby greatly reducing CDN data costs (click here to see a demo). Netflix, as one of the largest providers of OTT video, also continually pushes the boundaries in optimizing adaptive video delivery. From encoding optimization, to quality analysis, to using predictive analytics to detect problems, Netflix is remarkably transparent in their efforts, and their tech blog should be on every streaming enthusiast’s reading list.
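
As a hedged sketch of that kind of per-segment pruning (not Telestream’s actual algorithm), imagine each segment carries a perceptual quality score per rendition, and any rendition that doesn’t measurably improve on a cheaper one gets dropped from the manifest for that segment. The scores and threshold below are invented for illustration.

```python
# Illustrative per-segment rendition pruning (invented scores and threshold).
# For each segment, drop renditions whose perceptual quality gain over the
# last kept rendition falls below a just-noticeable threshold.

JND_THRESHOLD = 1.0  # hypothetical "just noticeable difference" in quality units

def prune_renditions(segment_scores):
    """segment_scores: list of (bitrate_kbps, quality_score), low to high bitrate."""
    kept = [segment_scores[0]]  # always keep the cheapest rendition
    for bitrate, score in segment_scores[1:]:
        if score - kept[-1][1] >= JND_THRESHOLD:
            kept.append((bitrate, score))
    return kept

# A very simple scene: the 8000 kbps rendition adds almost nothing over 1500 kbps.
simple_scene = [(800, 70.0), (1500, 92.0), (3000, 92.3), (8000, 92.5)]
print(prune_renditions(simple_scene))  # -> [(800, 70.0), (1500, 92.0)]
```

The interesting work, of course, is in computing a trustworthy perceptual score per segment; the pruning itself is simple.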

In this post, I’ve aimed to provide just a few examples of solutions already in market focused on adaptive streaming optimization. Clearly, for every solution that has made it to market, there are countless more working their way through labs. This is not surprising because, as amazing as the technology is, there is still plenty of room for improvement. Search for “adaptive streaming optimization” and you’ll find no shortage of research papers exploring everything from improving adaptive streaming performance over wireless networks to improving the efficiency of player heuristics. It all points to the fact that the foundation of adaptive streaming is solid and here to stay, but the era of adaptive streaming optimization has only just begun.


Streaming Large Events with Azure Media Services

In late April, Microsoft Production Studios leveraged Azure Media Services to webcast the Microsoft Build Keynotes. The following week, we used the same configuration for the delivery of the Microsoft Ignite keynotes. Combined, the events served live, HD quality streams to many hundreds of thousands of unique viewers. The webcast performed very well and was accessible on a broad range of platforms, devices and browsers. Recently, I shared a brief peek into how the team at Microsoft Production Studios delivered the events and some insights gained along the way.  Click here to read the article on the Production Studios Blog.


Audience Participation in TV is About to Erupt

In the very near future, audience participation in television programming is going to skyrocket.  There are a handful of reasons why this is the case.  To unpack them, let’s take a quick look at the past and present of radio programming.

Barry Gray – The Father of Talk Radio.

Talk radio is one of the most popular programming formats there is today, second only to country music.  Furthermore, a significant portion of talk radio is comprised of audience participation programming.  While there were early experiments with the audience participation format in the 1930s and ’40s, the genre didn’t really take off until the late ’40s and early ’50s.  This emergence corresponds to the period shortly after WWII when most of America could finally claim to have a telephone in the home.  AT&T’s release of the Model 500 telephone in 1949 spurred the rapid adoption of telephones in ordinary households, and by the mid-’50s, calling a radio program and participating “on-air” was no longer a challenging endeavor.

Television has not had the same benefit as radio when it comes to audience participation.  The equipment, connectivity, and skills required in a medium dependent on video made it impossible for television to enable the levels of audience interaction enjoyed by radio.  Now, due to the proliferation of broadband, the commoditization of video capture technology, and the massive global adoption of Skype, audience participation in television is finally going to have its day.

Conservative estimates suggest the Skype network now boasts over 600 million accounts.  There have been well over 3 billion Skype clients downloaded to date.  In 2013, Skype usage reached over 50 million concurrent users, and that number has continued to grow.  In 2014, Skype accounted for 40% of international calls.  By many measures, Skype communication (which includes video and audio) is becoming as commonplace as the telephone was in the second half of the 20th century.  It is now very easy for the typical audience member to place a video call in HD quality, and in many cases, they can do it from the smartphone they already carry in their pocket.

And it’s not just getting easier for the audience.  Skype is beginning to integrate with broadcast equipment to make it very easy for content producers to include Skype feeds in their programs.  Skype TX is a version of Skype designed for broadcasters, and Microsoft has been integrating it with popular broadcast solutions provided by NewTek, Quicklink, and Riedel.  As a result, we are now starting to see Skype being used more regularly in television broadcasts.  Ellen, Oprah, and Jimmy Kimmel are a few programs that are deepening audience engagement in this manner and, at the 2014 X Games on ESPN, Skype enabled fans to communicate directly with athletes.  I have no doubt we will see rapid adoption of audience participation in television as broadcasters continue to grow comfortable with the tool sets.

Finally, it’s also important to recognize the changing expectations of television audiences. The latest generation of media consumers has never known a time when media was anything less than a two-way exchange.  Social media has set the baseline that consumer relationships with programs and brands should be a two-way street.  Additionally, second-screen use while watching television is at an all-time high: over 56% of Americans engage in a second digital activity while watching television, a behavior known as “screen-stacking.”  These second screens make it very easy to place Skype calls, and as more viewers get Internet-connected consoles and smart televisions, it will become even easier to dial into shows from the comfort of one’s living room.  Very soon, programs like American Idol or America’s Got Talent just might include people auditioning from the comfort of their own homes.



Streaming Media West 2014

I’ll be speaking at Streaming Media West:

D105 – Delivering Audience Centric Webcasts In The Enterprise

(download presentation)

Successfully distributing a live event to the web is not the same thing as producing a “successful” webcast. Flawless technical execution means little if the audience is not engaged. This presentation will cover how Microsoft Production Studios provides online audiences inside Microsoft’s network with engaging viewing experiences that recognize important differences between in-person and virtual attendees. From content design to production and transmission execution, this session will share practical tips for attracting and captivating online audiences within the enterprise.


No More Focus on the Family

The concept of family is dead… well, at least when it comes to measuring the media viewing behaviors of households.  It served a useful purpose in the days when homes had a single television and families gathered together during primetime viewing hours, but in this era of individualized programming and connected devices, there is no more value in family.  As this IAB survey shows, audiences are viewing media content on many different devices these days, not just traditional televisions.  This means viewers are no longer confined to the “household” when watching media content.  Furthermore, a recent study by Conviva, a leading online media measurement and analytics company, shows that the number of homes consuming multiple content streams during primetime hours increased by 28% between 2012 and 2013.  In other words, when someone in the house is watching a show on television, it’s ever more likely that others are watching programming of their own choosing on other devices within that same household.

Conviva 2014 Viewer Experience Report

Of course, this presents challenges and opportunities relative to traditional broadcast measurement practices.  What becomes of the Nielsen family when the family has died off and the walls of the household have crumbled?  It appears Nielsen is looking to the same place everybody else looks when it’s time to celebrate individuality: Facebook.  The two companies recently announced a program that will kick off this fall to measure viewing behavior through your Facebook account.  Whether or not it’s appropriate to use a social network to measure unrelated consumer behavior is a conversation for another day, but both companies claim their methodologies prioritize anonymity and will focus only on measurement (at least out of the gate).  So, congratulations, you are no longer standing in the shadow of your family: you are an individual.  On the other hand, when it comes to viewing behavior and advertising dollars, you are not just an individual, you are now also a person of interest.


Video in the Sales & Marketing Funnel

I’ll be the first to admit that the world doesn’t need another sales and marketing funnel, and I’m confident you’ll find nothing new or surprising about the one included here. In fact, some might argue that the funnel should now be a circle or a cloud or some such modernized shape. Having said that, a funnel works just fine in the context of this post because my focus is not on the funnel itself, but on video marketer behavior within its various stages.
Very often I see marketers and webcasters capturing and celebrating the same sets of metrics (views, shares, likes, completions) regardless of where that content lives in the funnel. In reality, the goals are unique in each stage and should, therefore, drive very different measurement and success criteria. Furthermore, one shouldn’t overlook the fact that audiences have a different relationship with (and expectation of) marketers depending on where they sit in the funnel. In short, when it comes to designing experiences and success metrics, video marketing is not one-size-fits-all.
By way of example, I once managed a webcast that aimed to capture an audience via paid banner advertisements on popular websites throughout the course of the live broadcast. The first part of the strategy worked: the banners captured people’s attention and they clicked through to the webcast. Unfortunately, the business owner couldn’t justify delivering the webcast without capturing attendance through a short registration process. In the prospect or sales-generating stage, a registration page is acceptable and expected (and usually provided ahead of time). However, in the awareness-generating stage (where this event clearly resided, given the need for banner advertisements), the registration component negated the entire campaign. Potential viewers were drawn to the material and then presented with a closed door that could only be unlocked by supplying personal information. They had yet to receive any value from the content, but they were already being stopped and asked to pay a cover charge consisting of their time and personal data.
The strategy was doomed to fail.  Instead of taking this route, they might have deployed the same demand generation strategy while foregoing registration in exchange for other, less demanding and more rewarding interactions. After all, they had already done the hard part of getting the audience to the front door… they just made the door too difficult to open.


In an attempt to help people break out of this one-size-fits-all mentality when it comes to video usage, I created the simple table below. It aims to provide some ideas about how to think about the content experience and success metrics in the various stages of the funnel. It’s based on years of experience producing media content and witnessing audience behavior. Still, the last thing I want to do is replace the existing one-size-fits-all approach with a different one. There is variation in what people are trying to accomplish with media, even in similar stages of the funnel. Moreover, audiences and expectations can differ from industry to industry. This is a guide, hopefully a very helpful guide, but it’s not gospel.  If it helps provide a little clarity of purpose, it will have done its job.  As usual, I welcome and encourage feedback.

Update: Since publishing this, I was tipped to a good resource for digging deeper into the “Barriers to Consumption” concept. The sales funnel slide and the Content Marketing Manifesto by kunocreative are great resources for diving deeper into that topic.


Media Content Providers: The New ISP Watchdogs

As video consumption on the Internet continues to surge, bandwidth bottlenecks become more prevalent as well. Even though video content providers like YouTube and Netflix have loads of bandwidth at the head end, they are almost powerless when it comes to controlling the throughput of the pipes that connect them to the end users… almost.
One way major providers have sought to combat this problem is to make content caching servers (essentially edge servers) available for deployment directly within the networks of the Internet Service Providers (ISPs). Netflix offers their Open Connect solution to allow ISPs to source popular content from directly within their own networks. YouTube deploys similar solutions and also offers an open peering policy that essentially provides a low-cost way for ISPs to plumb directly into YouTube’s content network.
While these are great options for improving video delivery, they are voluntary from the ISP’s perspective. Moreover, if an ISP agrees to leverage one of these options, it might be cutting into a potential revenue stream, as demonstrated by the recent agreement between Netflix and Comcast. In this instance, Netflix set the dangerous precedent of paying Comcast for improved access to their mutual customers. This may incentivize ISPs to avoid voluntary delivery-improvement configurations with major providers in favor of holding end users hostage for more lucrative terms. Basically, ISPs have the option of billing their users for access to content while also charging the content providers for quality access to their subscribers. Not a bad deal. I should note, however, that the ISPs have an arguably valid point of view as well: should they be expected to create special configurations in support of any emerging content provider at no additional cost? Regardless of which position one takes, the situation creates an interesting dilemma for content providers, and they are beginning to tackle the problem through consumer education.
Netflix has had an ISP Speed Index for some time now. The index rates ISPs based on their ability to deliver quality media streams to customers. It’s Netflix’s way to celebrate those that perform well and publicly shame those that perform poorly. Ideally, a consumer looking to subscribe to the Internet for over-the-top (OTT) content would check the index before signing on the bottom line with an ISP. At a minimum, it allows consumers to see how their providers perform relative to others so that they can put pressure on their ISP to improve. Now, YouTube is getting into this game with the launch of their new Video Quality Report. It seeks to teach users how video delivery works, rate their ISP, and compare it to other local providers. Of the two, I feel YouTube does a better job of educating the average user as to how video moves through the Internet and why ISP performance matters, but the intended outcomes are the same: informed consumers will drive better behavior by the ISPs they rely on.
Today, ISP advertising and marketing promotes bandwidth and speed capabilities very generically. With mounting pressure from content providers, market differentiation in this space may start to stem from these types of provider specific metrics. Overall bandwidth claims may start to have less meaning for consumers than the end-to-end performance ratings assigned by the content providers those consumers value most.

Update: Dan Rayburn (of StreamingMedia.com) recently posted an article about ISP-specific messages being delivered by Netflix to alert viewers when the ISP’s bandwidth is congested.  Netflix recently announced that this practice is a test that will conclude on June 16th.


4k Media: Arriving Fast and Slow

Browsing through the television or mobile devices sections of a big box store today might give you the impression that 4k has landed.  While it has in some respects, there are many more pieces to the 4k puzzle that are years away from being assembled into place. 

First of all, what is 4k?  Simply put, 4k, formally known as Ultra High Definition (UHD), is a screen resolution comprised of roughly 4,000 horizontal pixels (4096×2160 for digital cinema and 3840×2160 for television).  It’s essentially four times the resolution of 1920×1080 HD, which represents the high end of HD resolutions.  Oddly enough, all of the formats that came before it were known by their vertical resolutions (480, 720, 1080), while the UHD generation is known by its horizontal resolution, which can make things harder to keep straight.  In addition to the frame size, there are also improvements in color depth, gamut, and other areas that improve the viewing experience.
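
A quick back-of-the-envelope pixel count makes the “four times” claim concrete:

```python
# Quick pixel-count check for the "four times HD" claim above.
uhd_tv = 3840 * 2160     # UHD television: 8,294,400 pixels
full_hd = 1920 * 1080    # 1080p HD:       2,073,600 pixels
print(uhd_tv / full_hd)  # -> 4.0, i.e. exactly four times the pixels
```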

In an age where more is better, 4k seems like it should explode onto the scene.  However, there are quite a few reasons we won’t see it take hold overnight.  First of all, it’s big, really big.  It’s essentially four times the size of the content broadcasters are making and delivering today.  This means that most of the existing infrastructure in broadcast facilities will have to be upgraded or replaced, and infrastructure that can be retained will have its capacity reduced by a factor of four.  Given that many broadcasters feel they just completed the transition from standard definition to high definition, most will be unable to jump quickly into UHD.  Some will adopt it in stages, starting with cameras and edit systems, because there are many advantages to beginning to build a 4k asset library, but that content will be scaled down to HD resolutions for broadcast.

So traditional broadcasters may not jump into the fray right away, but what about over-the-top (OTT) providers like Netflix?  After all, Netflix is already delivering their hit original series, House of Cards, in 4k, right?  While this is true, the number of people capable of viewing the series in 4k likely numbers only in the hundreds as of this writing.  There are a few reasons for this.  First of all, as previously mentioned, the 4k footprint is big.  Using the standard compression formats in use today (typically h.264 for video), a 4k video stream could require upwards of 25 Mbps or more, which can be two to three times the bitrate delivered at the high end today.  This would choke most Internet Service Providers (ISPs) and ensure that very few viewers could tune in at that resolution, and it doesn’t even take into account the delivery costs inherent in pushing all that data around.  For this reason, Netflix has wisely chosen to encode and deliver the content using h.265, a more efficient compression spec that can decrease the required bandwidth by 30 to 50% compared to h.264.  Therefore, to watch House of Cards in 4k, you need a television with 4k resolution that is capable of decoding h.265 and running the Netflix application natively.  The problem is, there are few televisions on the market able to do this.
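
As a rough sketch of what those numbers imply (the 25 Mbps figure and the 30 to 50% savings come from the paragraph above; the arithmetic is the only thing added):

```python
# Rough arithmetic on the bitrates discussed above (illustrative only).
h264_4k_mbps = 25             # approximate 4k bitrate in h.264, per the post
savings_range = (0.30, 0.50)  # quoted h.265 efficiency gain

for savings in savings_range:
    h265_mbps = h264_4k_mbps * (1 - savings)
    print(f"{int(savings * 100)}% savings -> ~{h265_mbps:.1f} Mbps for 4k in h.265")
# Prints roughly 17.5 Mbps and 12.5 Mbps: still a heavy load for many broadband
# connections of the time, which is the point being made above.
```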

I should note, there are others trying to attack this issue from different angles.  For example, Beamr is a technology that claims to filter media in a way that allows 4k content to be encoded in h.264 at bandwidths equivalent to content encoded in h.265, while maintaining the same perceived level of quality.  It claims to do this by filtering out information that cannot be perceived by human vision during the encoding process.  It has promise, but this type of approach is still on the fringe, and it remains to be seen whether it will be implemented broadly or whether the industry will skip stop-gap measures such as this in favor of pushing forward with h.265.  If solutions of this nature get adopted in the near term, they may help speed up 4k’s arrival.

One might also ask: what about my Xbox, PS4, or Roku device?  Can’t I view 4k on these?  Again, we run into an obstacle.  Currently, most shipping consoles and set-top boxes have HDMI 1.4 ports, which are technically capable of delivering 4k resolutions but don’t have the bandwidth to support high frame rates.  To achieve the true promise of 4k, the console and the connected television and/or receiver will all need to support HDMI 2.0.  To my knowledge, it has not yet been announced whether the existing game consoles and set-top boxes will be firmware-upgradable to HDMI 2.0.  On the other hand, televisions are starting to ship with HDMI 2.0 today.  There is a great CNET article by Geoffrey Morrison that captures a snapshot of where the major manufacturers sit with HDMI 2.0 support.  Regardless, Netflix is limiting their 4k content to UHD televisions with the built-in Netflix app for now.

So let’s assume we start adopting UHD televisions, the consoles and set-top boxes get upgraded to support HDMI 2.0 and h.265 decoding, or the industry chooses to embrace a technology like Beamr’s to make encoding more efficient… now can we watch our 4k OTT content?  We’ll be much closer, but the fact remains that even at the bandwidths of more efficient h.265 encoding, 4k is a big data hog.  When adoption starts to reach a tipping point, Content Delivery Networks (CDNs) and ISPs will feel the congestion on their networks.

CDNs are already exploring ways to be more efficient in delivering content to homes.  According to Tom Leighton, CEO of Akamai (one of the world’s largest CDNs), speaking on CNET’s For the Record podcast, the company has been exploring many techniques for tackling this problem, including broader use of multicast (where applicable), peer-to-peer and client-assisted delivery, and more.  In fact, the company already has a client product called NetSession that aims to create download efficiencies.  Netflix has sought to improve the situation on their own behalf as well.  Today, Netflix offers ISPs a cache server product called “Open Connect” that essentially allows the ISPs to put a Netflix cache server inside their own network infrastructure.  This means end users don’t have to source media content all the way back from Netflix; rather, they can get it from within the walls of the ISP, where bandwidth is less constrained.  Netflix even rates how well different ISPs deliver media, likely in an effort to publicly shame ISPs into incorporating these cache servers to improve overall performance and lower the cost of Netflix content delivery.  While this approach is good for Netflix and for Netflix customers, it’s really not scalable to build unique caching methodologies for every content provider.  I believe CDNs, like Akamai, will start to build similar solutions into ISPs and potentially even incorporate technology into televisions, devices, and set-top boxes to aid in end-to-end delivery.  By going this route, most or all content providers will be served, rather than just a few top players like Netflix.  Just as we see Dolby, DTS, and other monikers on our electronics today, perhaps we will someday see references to delivery brands that give consumers confidence they will spend less time in buffering states. Is it time for Akamai Inside?

In addition to all of the above-mentioned obstacles that will hamper rapid 4k adoption, you also can’t overlook the industry’s temptation to take shortcuts in the interim.  One such shortcut will come in the form of Dolby Vision.  With Dolby Vision, Dolby intends to enhance the richness of video in our media experiences the same way it did with audio.  Dolby Vision increases brightness levels by 40 times over conventional television, expands the color depth, and enhances contrast in ways that create a dramatic perceptual difference in image quality.  They are strong advocates of the notion that better pixels beat more pixels.  In the short term, content creators and broadcasters may choose to make their content richer by leveraging Dolby Vision rather than trying to jump straight into higher resolutions.  Research in human visual acuity suggests that this is a smart strategy, because improved color volume and dynamic range are shown to have a higher impact on how we perceive image quality than resolution improvements.  In fact, depending on the size of the display and the distance at which the viewer is sitting, the argument could be made that a jump in pixel count may make no perceptible difference for many viewers (click here for a great resource on resolution, display size, and viewing distance, which is also the source of the graphic to the left).

All of this suggests that we are a few years away from 4k media consumption being the norm for a significant portion of media audiences, but it will eventually emerge.  Ironically, one of the areas where we may see 4k most prevalent in the near term is user-generated content.  Tablets and phones are quickly beginning to support 4k photography and video recording.  That type of content is often created and viewed locally, bypassing many of the issues presented above.  When it’s not, it’s usually short-form and relevant only to small audiences, making it a fairly light load on network resources.  This means many people may find themselves viewing their home videos and YouTube channels in 4k while they wait for the broadcasters and delivery systems to catch up.

How long do you think it will take for mainstream adoption of 4k media consumption?  Leave a comment and let me know your thoughts.


2014 Winter Olympics Show Broadcasters a Path to the Cloud

In mid-February, I had the opportunity to visit the iStreamPlanet facility in Las Vegas to experience the end-to-end streaming solution behind the 2014 Sochi Winter Olympics.  It was an opportunity to see and discuss the major workflow components with the solution providers, which included iStreamPlanet’s Aventus, Microsoft Azure, Adobe Primetime, and Akamai.  This certainly isn’t the first time that the Olympics have been delivered online.  This isn’t even the first time it was delivered by the players noted above.  What makes this Olympics so unique is that it was the first time it was delivered end-to-end via the cloud.  According to data released on Microsoft TechNet, over the event’s 18 days, “Windows Azure provided information on the Games to more than 100 million fans and guests through sochi2014.com and delivered more than 6,000 hours of high-definition streaming from Windows Azure Media Services, to 5 broadcasters across 22 countries in 4 continents. This included 204 live streaming channels, more than 100 TB of storage and around 500 billion storage transactions.”

The ingest and management of live streams delivered on behalf of NBC in the United States was handled by iStreamPlanet, whom I’ve worked with for many years delivering high-profile Microsoft events including keynotes, conferences, and product launches.  Along the way, we’ve had many “firsts” relative to breaking new ground on how content was delivered, how much bandwidth was pushed, how many viewers tuned in, which technologies were in play, and so on.  That said, the encoding was always done by high-end hardware systems sitting locally in racks, turning HD-SDI signals into bits and then sending them out to a Content Delivery Network.  This is a common workflow today, and master control centers are still predominantly signal-based for many broadcasters.  Additionally, those that have converted their live encoding workflows to IP are still primarily doing the heavy lifting on-prem.  This is about to change.

It wasn’t all that long ago that cloud advocates were still hesitant to include real-time video encoding of contribution-quality content as a feature the cloud could bring to the table.  Those that did often considered GPU-accelerated instances or highly customized cloud implementations a prerequisite for achieving sufficient performance.  Contrary to this perspective, the Sochi Winter Olympics demonstrated that professional broadcast standards of quality and performance can be delivered via the cloud at massive scale… and the benefits are enormous.

A Traditional Master Control Center

One notable benefit of moving these workflows to the cloud comes in the form of cost savings on infrastructure investment.  The expense of broadcast equipment (satellite dishes, receivers and decoders, signal-based frame syncs and routers, power and cooling systems, monitoring systems, and so on) begins to disappear when following the model used for the 2014 Winter Olympics.  Additionally, the bottlenecks those systems impose also start to disappear when shifting to the cloud.  In Azure, new encoding and origin instances can be added and removed as demand dictates, which means your capacity is no longer limited by your annual capital budget.  Likewise, since you’re only paying for what you need when you need it, you can truly adapt your capacity to seasonality.  Even on-demand storage costs can be reduced dramatically by leveraging Dynamic Packaging in Azure Media Services.

I should note, I completely understand the reluctance many broadcasters will have toward embracing workflows that can’t be seen and touched within the facility.  I often feel this way myself.  However, that’s where solutions like iStreamPlanet’s Aventus enter the equation.  Aventus is a real-time ingest, stream management, and publishing system that has been designed and deployed by a company that has lived and breathed live webcasting for over a decade.  They’ve seen and solved most of the problems broadcasters will face, and they’ve built the solutions into a platform that sits in the cloud to exploit the benefits noted above.  Handling the many nuances associated with webcast delivery, like captions, lost source signals, encryption, ad cues, pre-show and post-show slates, redundancy, and fault tolerance (the list goes on), is a key tenet of the platform.  Furthermore, during the Olympics broadcasts, iStreamPlanet was able to reliably monitor the health of the end-to-end workflow on dashboards (complete with feed status and thumbnails) that spanned just a few PC monitors.  Another key benefit of the solution is that default media encoding profiles developed by industry experts like Alex Zambelli are built directly into the platform (CEO Mio Babic lightheartedly refers to this as “Zambelli Inside”).  Certainly, there is no requirement to leverage a platform like Aventus in order to leverage the cloud, but it can go a long way toward applying the broadcaster’s mindset to an off-prem solution.

In summary, there will definitely be challenges that continue to present themselves moving forward.  There will be technical challenges, like the continually increasing horsepower and bandwidth needed to encode and deliver emerging resolutions and formats such as h.265/HEVC and 4k.  And, there will be budget management challenges as companies start to move from CAPEX to OPEX thinking and planning.  Still, it’s clear that the Sochi Winter Olympics provided a sneak peek down a path that broadcasters will ultimately be taking toward the cloud.

Additional Information:  Learn more about Microsoft’s involvement in the 2014 Winter Olympics.   Learn more about the webcasting technical solution via Alex Zambelli’s blog.   Learn more about how the Olympics streaming performed via iStreamPlanet’s blog.


Hybrid Events Expand Audience and Increase Impact


A hybrid event is an event that provides both an in-person and an online experience, and it can be an extremely effective way of expanding your reach to audiences who may otherwise be unable to attend.  While there are many benefits to hybrid events, many event planners share a common concern that the online experience may cannibalize the audience for their in-person experience.  However, the evidence suggests that online experiences do not negatively impact in-person attendance and, in many cases, they help to promote and drive it.  Likewise, by delivering a hybrid experience, event owners can expand reach and include broader audiences to achieve greater impact.
According to research conducted by ROI of Engagement, 18% of virtual attendees to the Virtual Edge Summit in 2010 chose to attend the conference in person in 2011.  Likewise, a separate poll of those attendees revealed that 82% thought attending the virtual event was “very helpful” in making the decision to attend in person for the 2011 Summit.  Cisco experienced a similar trend with their 2009 Cisco Live! event.  According to Kathy Doyle, director of Global Cisco Live! and Networks Conferences, almost 35% of virtual attendees of the 2009 Cisco Live! event indicated that they would attend the in-person event in 2010, while only 7% of the in-person attendees said they would rather attend virtually.  “Bottom line: it doesn’t cannibalize your live events,” she says. “We see it now as an amazing marketing channel and awareness funnel for our activities.”
Another insight comes from Dana Freker Doody, Vice President of Corporate Communications for the Expo Group.  According to Dana, “We are finding now, as we move into a few years of hybrid and virtual events under our belts, that face-to-face audiences are not disappearing.  We are not seeing cannibalization by putting out offerings online.  We are seeing more people actually being driven to the face-to-face event based on what they are seeing online.”
There are many reasons the growth of online events isn’t leading to the demise of in-person events.  According to Matt Heinz of Heinz Marketing, in-person events are still a great way to get an “out of the office” perspective, to meet new people and deepen relationships, to talk to vendors, and to use casual moments to form relationships and generate ideas. From Julius Solaris’ perspective, editor of Event Manager Blog, there are even a myriad of “secret” reasons we attend events ranging from a desire to get out of the office to wanting to attend parties with others in our industry.  One person I spoke to at a conference recently told me that he really appreciates being up close and in the same room as prestigious speakers.
And just as there are valid reasons to attend in-person events, there are equally valid reasons to attend virtually.  Elimination of travel costs and more flexibility relative to one’s time commitment are a couple of key advantages for online attendees.  Still, others may prefer the anonymity of chatting or asking questions of speakers in virtual environments rather than in person.  Others like the quick access to support materials or the ability to bounce through multi-track sessions with ease.
Ultimately, the hybrid event is a strong way to create an engaging and valuable experience for audiences with varying participation preferences. Furthermore, instead of thinking of an event as being online or in-person, I find it more useful to identify the audience member as being online or in-person (or both).  The distinction is subtle but important.  It helps us remember that the audience is participating in the same event through different mediums, and that those mediums should be tailored to provide the best possible attendee experience in their own unique ways.



Just as you’d expect a movie adaptation to take on a very different form than its Broadway counterpart, you should equally expect an online event to provide audiences with a uniquely virtual experience.  Wherever possible, technology can also help bridge the gap between the remote and local attendee.  Use of social tools like Twitter and LinkedIn should be designed around the in-person and virtual attendee alike.  Speakers should be coached to present to the virtual audience as well as to those in the room, and interactive opportunities like Q&A periods should incorporate both types of attendee.
In summary, there are many driving reasons to attend an event in person, even when that event is viewable online.  Likewise, the evidence suggests that the addition of a virtual component to an event does not negatively impact in-person attendance.  Instead, it expands the reach of your message and drives awareness of, and attendance to, future events.  Ultimately, if your content is crafted with the audience in mind, and if your experience is designed to leverage the strengths of the mediums being used – a hybrid event can drive larger audiences and create powerful results.


iStrategy Conference: Video Stats and Tips

Recently I had the opportunity to attend the iStrategy Conference in San Diego. The event brings together marketers of all kinds (agency, enterprise, entrepreneurs, SEO, SMO, analytics, big data, etc.) for a single, day-long, interactive conference. I was exposed to a lot of interesting people, companies, and projects at the conference and found it a good use of my time and travel budget. While there were many great sessions and takeaways, I thought some of the video stats and tips I gathered might be appropriate to share here. Much (but not all) of the content below was shared by Courtney Pierce from Brightcove, an industry-leading online video platform provider.

  • Earned and owned content is beginning to be more important to brands than paid.
  • Brands are becoming publishers and companies are building broadcast centers in-house.
  • On average, pages with video attract 2 to 3 times more traffic than those without and search engines favor sites that have video.
  • Including the word “video” in the email subject line nearly doubles email open rates (from 7% to 13% open rate).
  • Live streaming is growing in popularity.
  • Mobile: The average user spends 2 hours and 38 minutes per day on a mobile device.
    Video viewing on mobile is increasing dramatically (300% growth YoY), and people are starting to watch longer-form content on mobile.
  • Tactical Video Publishing Tips:
    • Thumbnails matter. Compelling thumbnails are proven to drive views.
    • Quality of content matters
    • Speed of loading matters:
      Studies show that users start to bail on a video if it doesn’t load within 3 seconds. If it doesn’t load within 10 seconds, 95% of viewers will have clicked away.
  • What brands can learn from successful YouTube content creators:
    • Ask people to share and/or subscribe (within the content)
    • Provide content with utility
    • Use annotations (clickable overlays) to drive engagement