Eindhoven, Netherlands

Civolution is a provider of technology and services for identifying, managing and monetizing audio and video media content. The company offers a portfolio of proprietary and patented digital watermarking and digital audio and video fingerprinting technology solutions for media protection (forensic tracking of media assets in pre-release, digital cinema, pay television and online), media intelligence (broadcast monitoring, internet and radio tracking) and media interaction (Automatic Content Recognition and triggering for second-screen applications). (Wikipedia)

Civolution | Date: 2013-07-15

A computer-implemented method of automatically adding an identifier related to a content item to a communication in a multi-user communication network such as a social network. The method comprises obtaining a robust fingerprint of the content item, retrieving the identifier from a database using the robust fingerprint, and adding the identifier, formatted in a format suitable for the multi-user communication network, to the communication. Preferably, the robust fingerprint relates to a particular timepoint in the content item and the identifier relates to an aspect of the content item at that timepoint. A corresponding system and computer program product are also described.
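
The flow in the abstract can be sketched as a fingerprint lookup followed by network-specific formatting. This is a minimal toy illustration, not the patented method: the fingerprint function, database layout, and hashtag formatting are all invented here.

```python
# Toy sketch of: fingerprint -> database lookup -> formatted identifier.
# All names and data structures below are illustrative assumptions.

def robust_fingerprint(samples, timepoint):
    """Stand-in for a robust fingerprint: quantize a short window of
    samples around the timepoint into a coarse bit pattern."""
    window = samples[timepoint:timepoint + 4]
    return tuple(1 if s > 0 else 0 for s in window)

def add_identifier(communication, samples, timepoint, database):
    """Look up the identifier for the fingerprint and append it to the
    communication, formatted for the target network (here: a hashtag)."""
    fp = robust_fingerprint(samples, timepoint)
    identifier = database.get(fp)
    if identifier is None:
        return communication
    return f"{communication} #{identifier}"

db = {(1, 0, 1, 1): "EpisodeS01E04"}
post = add_identifier("Watching this now!", [3, -1, 2, 5, -2], 0, db)
```

A real system would use a perceptual audio/video fingerprint that survives compression and noise; the dictionary lookup stands in for a large-scale fingerprint index.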

In a method of distributing content in plural fragments, each fragment being no longer than a given maximum fragment length, the content is watermarked prior to being fragmented. The watermarking comprises embedding a given payload symbol from a given alphabet in a given segment of the content, and treating a segment preceding or following the given segment as an intermediary segment, the length of this intermediary segment being substantially equal to or greater than the maximum fragment length.
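
The geometric idea is that if every intermediary segment is at least as long as the maximum fragment length, no single fragment can overlap two payload segments. A hypothetical layout calculation (all lengths are made up for illustration):

```python
# Compute a segment layout where payload segments are separated by
# intermediary segments at least as long as the maximum fragment
# length, so any fragment overlaps at most one payload symbol.

def segment_layout(num_symbols, payload_len, max_fragment_len):
    """Return (start, end, kind) tuples alternating payload and
    intermediary segments along the content timeline."""
    layout, pos = [], 0
    for i in range(num_symbols):
        layout.append((pos, pos + payload_len, f"symbol_{i}"))
        pos += payload_len
        if i < num_symbols - 1:
            layout.append((pos, pos + max_fragment_len, "intermediary"))
            pos += max_fragment_len
    return layout

plan = segment_layout(num_symbols=3, payload_len=5, max_fragment_len=10)
```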

Civolution | Date: 2012-10-09

A method is described for detecting a payload embedded using watermarking in a content stream, where the payload differs between a first and a second segment of the content stream and the payload in the second segment has a predetermined relationship with the payload in the first segment. The method selects a point in the content stream where the first segment is likely to end and the second segment to begin, samples the stream to obtain a first set of samples before the chosen point and a second set of samples after the chosen point, and detects the payload on a combination of the first set and a transformation of the second set. The transformation is based on the assumption that the second set is from the second segment and exploits the relationship between the payloads in the first and second segments.
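
As a minimal numeric sketch, assume (purely for illustration) that the predetermined relationship is that the second segment carries the negated payload of the first. Undoing that relationship on the samples after the chosen point lets both halves reinforce the same correlation:

```python
# Hypothetical detector: the assumed relation is payload_2 = -payload_1.
# We invert the samples after the split so both halves correlate with
# the same payload sequence.

def detect(samples, split, payload):
    first = samples[:split]
    second = [-s for s in samples[split:]]   # undo the assumed relation
    combined = first + second
    # correlate the tiled payload against the combined samples
    return sum(c * payload[i % len(payload)] for i, c in enumerate(combined))

payload = [1, -1, 1]
stream = [1, -1, 1, -1, 1, -1]   # second half carries the negated payload
score = detect(stream, 3, payload)
```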

Civolution | Date: 2013-03-18

A method of embedding a pattern as a watermark into a content segment. Prior to modifying the content segment, the impulse response of a filter to be used for detecting the pattern is determined; the time-reversed impulse response of the filter is then inserted into the segment as a set of imperceptible features. The filter is an infinite impulse response filter that has a semi-white frequency spectrum and provides a pseudo-random time-domain response.
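
Embedding the time-reversed impulse response turns the detection filter into a matched filter: its output peaks exactly when it slides over the watermark. A toy illustration with a short FIR response standing in for the abstract's IIR filter (taps are invented):

```python
# Toy matched-filter effect: filtering the time-reversed impulse
# response with the filter itself yields the autocorrelation, which
# peaks at the alignment point.

h = [1.0, -0.5, 0.25]             # hypothetical impulse response
watermark = list(reversed(h))     # embed the time-reversed response

def filter_out(signal, taps):
    """Direct-form FIR filtering (full convolution)."""
    out = []
    for n in range(len(signal) + len(taps) - 1):
        acc = 0.0
        for k, t in enumerate(taps):
            if 0 <= n - k < len(signal):
                acc += t * signal[n - k]
        out.append(acc)
    return out

response = filter_out(watermark, h)
peak = max(response)              # energy of h, at the center sample
```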

Civolution | Date: 2013-06-12

A method of watermarking a video signal includes encoding the video signal using at least one encoding parameter that is time-varied according to a watermarking pattern. The parameter affects information lost while encoding the signal. The parameter may be a quantization factor corresponding to a particular coefficient of an encoding transform. The parameter may be an element of a quantization matrix corresponding to a particular coefficient in a block DCT transform. The method may be implemented in devices with limited processing resources by means of a software update. The method enables the devices to imprint an encoded signal with a robust watermark, which may survive subsequent decompression and recompression. Alternatively, a video signal may be watermarked by modifying a magnitude of a non-dc spatial frequency component in a manner which varies with time according to a watermarking pattern. Corresponding watermark detection methods and watermarking devices also are disclosed.
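
The core idea, that the information lost in encoding itself carries the mark, can be sketched by varying one quantization step over time. Step sizes and the pattern below are hypothetical, not values from the patent:

```python
# Sketch: quantize one transform coefficient per frame with a step
# size that is time-varied according to a watermark bit pattern.

def quantize(coeff, step):
    return round(coeff / step) * step

def encode_frames(coeffs, pattern, base_step=4, delta=2):
    """The step is base_step + delta when the pattern bit is 1,
    base_step otherwise, so the quantization error tracks the mark."""
    return [quantize(c, base_step + delta * b)
            for c, b in zip(coeffs, pattern)]

frames = encode_frames([10.0, 10.0, 10.0, 10.0], [0, 1, 0, 1])
```

Note that Python's `round` uses round-half-to-even, so 10.0/4 = 2.5 rounds to 2; a codec would use its own rounding convention.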

The problem is solved by a method for broadcasting a broadcast signal, comprising generating an information signal that has in time a first content up to a certain time and a second content after this time. The information signal is broadcast as a broadcast signal via a first communication link. At least one first feature is provided with respect to the second content, for example an advertisement. The second content is detected in the information signal using the at least one first feature. On detection of the second content, at least one second feature is extracted from the first content in the information signal preceding the second content. The at least one second feature is sent to a user device using a second communication link, different from the first communication link, the second communication link being faster than the first.
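
A rough behavioral sketch of that flow: watch the information signal for the first feature (marking the second content, e.g. an ad), then extract a feature from the content just before it, which would be pushed over the faster second link. Function names and the stream representation are illustrative only:

```python
# Monitor a stream of content items; when the item matching the
# ad feature appears, extract a feature from the preceding content.

def monitor(stream, ad_feature, extract):
    """Return the extracted feature of the content item immediately
    preceding the first occurrence of ad_feature, or None."""
    for i, item in enumerate(stream):
        if item == ad_feature and i > 0:
            return extract(stream[i - 1])
    return None

feature = monitor(["news", "news", "AD"], "AD",
                  extract=lambda seg: f"feature({seg})")
```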

A video watermarking scheme is disclosed, which is designed for the digital cinema format, as it will be used on large projector screens in theaters. The watermark is designed in such a way that it has minimal impact on the video quality, but is still detectable after capture with a handheld camera and conversion to, for instance, VHS, CD-Video or DVD format. The proposed watermarking system only exploits the temporal axis, which makes it invulnerable to the geometrical distortions generally caused by this manner of capture. The watermark is embedded by modulating a global property of the frames (e.g. the mean luminance) in accordance with the samples of the watermark. The embedding depth is preferably locally adapted within each frame to local statistics of the respective image. Watermark detection is performed by correlating the watermark sequence with extracted mean luminance values of a sequence of frames.
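
The temporal embedding and correlation detection described above reduce to a few lines. This is a minimal sketch; the depth, sequence values, and frame statistics are illustrative, and a real detector would normalize and threshold the score:

```python
# Embed: shift each frame's mean luminance by a watermark sample.
# Detect: correlate mean-removed luminance values with the sequence.

def embed(frame_means, wm, depth=2.0):
    return [m + depth * w for m, w in zip(frame_means, wm)]

def detect(frame_means, wm):
    """Correlation of the (mean-removed) luminance track with the
    watermark sequence; large positive score => watermark present."""
    mu = sum(frame_means) / len(frame_means)
    return sum((m - mu) * w for m, w in zip(frame_means, wm))

wm = [1, -1, 1, -1]
marked = embed([100.0, 100.0, 100.0, 100.0], wm)
score = detect(marked, wm)
```

Because only per-frame averages are used, cropping, rotation, or perspective distortion of the captured frames leaves the detector largely unaffected, which is the point of exploiting only the temporal axis.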

Civolution | Date: 2012-06-20

A device for rendering content from a first source, comprising a first input for receiving the content from the first source, a second input for receiving a substitution content item from a second source, a substitution module for substituting a segment of the content with the substitution content item, and rendering means for rendering the content with the segment substituted by the substitution content item. The rendering device has a monitoring module for monitoring the reception of the segment and for controlling the substitution module depending on whether the segment is being received, such that the substitution module ceases the substitution upon failure to receive the segment.
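
The monitoring behavior can be sketched as a simple guard: substitute only while the segment marked for replacement is actually arriving, and fall back to the incoming content otherwise. Class and method names are invented for illustration:

```python
# Behavioral sketch of the rendering device: the monitoring module's
# reception flag gates the substitution module.

class Renderer:
    def __init__(self, substitution_item):
        self.substitution_item = substitution_item

    def render(self, segment, segment_received):
        # cease substitution when the segment fails to arrive
        if segment_received:
            return self.substitution_item
        return segment

r = Renderer("local_ad")
shown = [r.render("broadcast_ad", ok) for ok in (True, True, False)]
```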

News Article | April 4, 2012
Site: thenextweb.com

Twentieth Century Fox has announced that it’s to start using Civolution’s media-monitoring technology to monitor the broadcast of its video content around the globe. Civolution’s technology helps companies monitor, manage and, subsequently, monetize their media content, through its Teletrax service.

Twentieth Century Fox, a subsidiary of News Corporation, is one of the six major film studios, and the media intelligence provided through this tie-up will let Fox’s TV distribution arm accelerate rights windows to broadcast its content overseas, monitor promotions of its series, and develop new business models for the distribution of its video.

“As the broadcast marketplace evolves into increasingly complex business models and consumer demand pushes broadcasters to make their programming available on-the-go, it is important to monitor broadcasts around the world and track, near-real-time, performance ratings as we partner with our clients to meet the new demands in their marketplace,” says Scott Gregg from Fox’s International Television Distribution.

The terms of the agreement will see Fox use Teletrax to monitor telecasts of its feature films and television shows across Europe, Australasia and the Middle East. This will also help Fox compare and contrast data with different markets around the world, as international broadcasters increasingly move closer to broadcasting the same shows on the same day/date as the US network schedules. Indeed, Fox recently launched Touch starring Kiefer Sutherland, which it premiered on the same day and date in more than one hundred countries.

The broadcast data provided by Teletrax will not only fulfill a need to track where its content is being broadcast, but also inform Fox where and when a broadcast is airing.

News Article | July 21, 2013
Site: venturebeat.com

Freddie Laker is the founder and CEO of Guide.

Temporal metadata isn’t currently being leveraged by the video industry, but it should be. No single development would be more important to the evolution of video, both online and offline, than evolving from today’s top-level, directory-powered descriptions to a more robust, second-by-second metadata-driven system. This sea change will transform how users interact with digital video content across platforms, and how advertisers and content creators approach the medium. But first, a working understanding of metadata as it relates to television content is important.

So what exactly is metadata? It is the descriptive information about content (title, cast, release date, promotional images) that is frequently used by programming guides. Without this embedded data, you would have no idea whether you were watching Law & Order: Criminal Intent or Law & Order: Special Victims Unit on late-night cable.

Let’s take it a step further and consider temporal metadata in the same television context. While TV metadata alone is useful enough, it’s also boring and basic: a digitized version of an old TV Guide issue. The same core data taxonomies for television have been used for decades. While this metadata accurately describes the content of a television episode as a whole, it doesn’t provide any details about individual scenes. What’s the name of that actor? What song is playing in the background? Where was that waterfall scene filmed? Where did that actress get her dress? This sort of scene-based data is referred to as temporal metadata; it applies not only to featured programming, but to commercials as well (if not even more so). The problem is that temporal metadata is currently not embedded, or “watermarked,” within programming, but it should be.
While there is a behind-the-scenes effort within the industry to develop a common metadata format that would allow creation-level tagging, the fruition of that process is probably still years in the future. In the meantime, mobile and second-screen devices have provided a reliable workaround in the form of Automatic Content Recognition (ACR) technology, which identifies, or “fingerprints,” content using assorted cues, mostly audio.

And let’s not stop at television. Rather, let’s bring temporal metadata to every video-based medium, on- and offline. As with TV, web-based video can benefit from temporal metadata’s nano-level insights. One such example dates back to 2011, when Google’s YouTube platform first started automatically adding captions to some of its videos using speech recognition technology. Although this clearly benefits web users who may be hard of hearing, it also provides queryable metadata that the search giant can use to more accurately sell advertising. I expect this to eventually play out across Google’s terrestrial video initiative Google TV as well.

With this in mind, let’s look at the single most commercially important aspect of temporal metadata: advertising. More specifically, because advertisers and brands strive with every fiber of their being to be contextually relevant, the potential monetization models presented by temporal metadata and complementary second-screen apps are both far-reaching and high-impact. In the future, apps leveraging temporal metadata could recognize objects on the screen and instantly direct viewers to online purchasing options; product placement would become so much more powerful, completely eclipsing the 30-second spot.
Furthermore, the ability of service providers to disaggregate a show into component parts would lead to stronger programming recommendation and search options, which in turn would allow for more accurate user preference settings and personalized marketing opportunities.

Startups like Vobile, Zeitera, and Civolution are developing software for smart TV platforms, but there is currently no large-scale commercial deployment of ACR by any of the major television manufacturers. That means if you want to figure out what song is playing during a basketball shoe commercial, the one option you don’t have is simply pressing a button on the remote and pulling up the song title and artist, not to mention info about the shoe.

People spend more than 60 percent of their leisure time watching television. Changing that passive viewing experience into an active one (i.e., engaging, searching, shopping) is like finding a gold mine on top of an oil deposit. While we’ve been saying this for years, temporal metadata is in fact the key we’ve been looking for to unlock this potential.

Freddie Laker is the founder and CEO of Guide, a technology startup that turns online news and blogs into video. Previously he was the VP of Strategy at SapientNitro, one of the world’s largest digital agencies.
