video

Belfast boxers show support for youth club's NHS appreciation video

Paddy Barnes, Sean McComb, Tyrone McKenna and Padraig McCrory are all seen in the video




video

KCMP (89.3 The Current)/Minneapolis’ Jim McGuinn And Glassnote’s Nick Petropoulos Collaborate On Videos To Support Charity

While sheltering-at-home in UPSTATE NEW YORK, GLASSNOTE Head Of Promotion NICK PETROPOULOS sent KCMP (89.3 THE CURRENT)/MINNEAPOLIS PD JIM MCGUINN a song of guitar riffs and an email about … more




video

Top 5 Video Editing Software

There was a time when there was little demand for video editing software. Over time, though, it has become one of the most widely used tools of modern society. One of the most common uses for video editing software is making vlogs. Apart from vlogs, video making...





video

Facebook Live Streaming and Audio/Video Hosting connected to Auphonic

Facebook is not only a social media giant; the company also provides valuable tools for broadcasting. Today we release a connection to Facebook, which lets you use the Facebook tools for video/audio production and publishing within Auphonic and our connected services.

The following workflows are possible with Facebook and Auphonic:
  • Use Facebook for live streaming, then import, process and distribute the audio/video with Auphonic.
  • Post your Auphonic audio or video productions directly to the news feed of your Facebook Page or User.
  • Use Facebook as a general media hosting service and share the link or embed the audio/video on any webpage (also visible to non-Facebook users).

Connect to Facebook

First, connect a Facebook account on our External Services Page by clicking the "Facebook" button.

Select whether you want to connect your personal Facebook User or a Facebook Page:

You can remove or edit the connection at any time in your Facebook Settings (Business Integrations tab).

Import (Live) Videos from Facebook to Auphonic

Facebook Live is an easy (and free) way to stream live videos:

We implemented an interface to use Facebook as an Incoming External Service. Select a (live or non-live) video from your Facebook Page/User as the source of a production and then process it with Auphonic:

This workflow allows you to use Facebook for live streaming, import and process the audio/video with Auphonic, then publish a podcast and video version of your live video to any of our connected services.

Export from Auphonic to Facebook

Similar to YouTube, it is possible to use Facebook for media file hosting.
Add your Facebook Page/User as an External Service in your Productions or Presets to upload the Auphonic results directly to Facebook:

Options for the Facebook export:
  • Distribution Settings
    • Post to News Feed: The exported video is posted directly to your news feed / timeline.
    • Exclude from News Feed: The exported video is visible in the videos tab of your Facebook Page/User (see, for example, Auphonic's videos tab), but it is not posted to your news feed (you can do that later if you want).
    • Secret: Only you can see the exported video; it is not shown in the Facebook videos tab and it is not posted to your news feed (you can do that later if you want).
  • Embeddable
    Choose whether the exported video should be embeddable in third-party websites.

It is always possible to change the distribution/privacy and embeddable options later directly on Facebook. For example, you can export a video to Facebook as Secret and publish it to your news feed whenever you want.


If your production is audio-only, we automatically generate a video track from the Cover Image and (if present) Chapter Images.
Alternatively, you can select an Audiogram Output File if you want to add an Audiogram (audio waveform visualization) to your Facebook video - for details please see the Auphonic Audiogram Generator.

Auphonic Title and Description metadata fields are exported to Facebook as well.
If you add Speech Recognition to your production, we create an SRT file with the speech recognition results and add it to your Facebook video as captions.
See the example below.
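For illustration, SRT captions pair sequential cue numbers with start/end timecodes and the spoken text. A small, hypothetical file (not taken from a real production) might look like this:

```
1
00:00:00,000 --> 00:00:03,500
Welcome to our podcast.

2
00:00:03,500 --> 00:00:07,200
Today we talk about audio processing.
```

Facebook reads such a file directly as a caption track for the uploaded video.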

Facebook Video Hosting Example with Audiogram and Automatic Captions

Facebook can be used as a general video hosting service: even if you export videos as Secret, you will get a direct link to the video which can be shared or embedded in any third-party websites. Users without a Facebook account are also able to view these videos.

In the example below, we automatically generate an Audiogram Video for an audio-only production, use our integrated Speech Recognition system to create captions and export the video as Secret to Facebook.
Afterwards it can be embedded directly into this blog post (enable Captions if they don't show up by default) - for details please see How to embed a video:

It is also possible to just use the generated result URL from Auphonic to share the link to your video (also visible to non-Facebook users):
https://www.facebook.com/auphonic/videos/1687244844638091/

Important Note:
Facebook needs some time to process an exported video (up to a few minutes), and the direct video link won't work until processing is finished - please try again a bit later!
On Facebook Pages, you can see the processing progress in your Video Library.

Conclusion

Facebook has many broadcasting tools to offer and is a perfect addition to Auphonic.
Both systems, together with our other external services, can be used to create automated processing and publishing workflows. Furthermore, export to and import from Facebook are fully supported in the Auphonic API.
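As a sketch of the API workflow (field names follow the Auphonic production API; the service UUID below is a placeholder for the one shown on your External Services page, and the exact schema should be checked against the API documentation):

```python
import json

AUPHONIC_API = "https://auphonic.com/api/productions.json"

def build_facebook_export_payload(title, description, service_uuid):
    """Build the JSON body for a new Auphonic production that
    exports its result to a registered Facebook external service."""
    return {
        "metadata": {"title": title, "description": description},
        # one entry per outgoing service; the UUID identifies the
        # connected Facebook Page/User
        "outgoing_services": [{"uuid": service_uuid}],
        "action": "start",  # start processing immediately
    }

payload = build_facebook_export_payload(
    "My Live Show", "Processed with Auphonic", "FACEBOOK-SERVICE-UUID")
body = json.dumps(payload)

# To actually create the production (requires HTTP basic auth):
# import requests
# requests.post(AUPHONIC_API, json=payload, auth=("user", "pass"))
```

The same service UUID can also be attached to a Preset, so every production using that preset is published to Facebook automatically.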

Please contact us if you have any questions or further ideas!




video

20+ Best WordPress Video Themes for 2020

If you’re a video producer or vlogger looking to set up your own video website to showcase your content, you’ll most likely need one that reflects your own unique style. You’ll need to think about the gallery options you’d want, color schemes, customizations, and the type of business you’re running. You should also consider the different technology you’ll need to […]




video

Thunderbolting Your Video Card

When I wrote about The Golden Age of x86 Gaming, I implied that, in the future, it might be an interesting, albeit expensive, idea to upgrade your video card via an external Thunderbolt 3 enclosure.

I'm here to report that the future is now.

Yes, that's right, I paid $500




video

Video Tutorial: How to Turn Anything into Gold in Photoshop

In today’s Adobe Photoshop tutorial I’m going to show you how to turn anything into gold using this simple combination of Photoshop filters and tools. The effect smooths out the details of a regular image and adds an array of shiny reflections to mimic the appearance of a polished metal statue. A gradient overlay gives […]

The post Video Tutorial: How to Turn Anything into Gold in Photoshop appeared first on Spoon Graphics.




video

Video Tutorial: How to Create an Embroidered Patch Design in Illustrator

In today’s Adobe Illustrator tutorial I’m going to take you through the process of creating a colourful embroidered patch, based on the kinds of designs associated with National Parks. The artwork will incorporate a landscape scene at sunset, which helps to keep the design simple with a silhouette graphic and a warm colour palette. Stick […]

The post Video Tutorial: How to Create an Embroidered Patch Design in Illustrator appeared first on Spoon Graphics.




video

Video Tutorial: Vintage Letterpress Poster Design in Photoshop

In today’s Adobe Photoshop video tutorial I’m going to take you through my process of creating a vintage style advertisement poster with letterpress print effects. We’ll start by laying out the design with a selection of fonts inspired by the era of wood type, along with some hand-drawn graphic elements using a limited 3-colour palette. […]

The post Video Tutorial: Vintage Letterpress Poster Design in Photoshop appeared first on Spoon Graphics.




video

The Best Free Zoom Backgrounds to Make Your Video Conferencing More Fun

If you’re a remote worker, you may have plenty of experience with video conferencing as a way to communicate with clients, team members, or other colleagues. But with millions of additional...

Click through to read the rest of the story on the Vandelay Design Blog.




video

Star Wars Size Comparison Video

The galaxy far, far away has items both big and small. The Star Wars Size Comparison Video created by MetaBallStudios brings droids, people and planets together from the Star Wars movies (Episodes I to VIII, Rogue One and Solo). See how your favorites size up against each other.

Comparison of many things from the Star Wars movies. Only movies from episode I to VIII, Rogue One and Solo. Obviously not everything appears, only the most representative.

Providing scale and context to your audience is one of the key tenets of data visualization, and this video does a fantastic job of giving you the context of the size of everything in the Star Wars universe.

Found on Gizmodo.com.




video

10 Things To Do Before Any Video Interview

We’re all working from home, and that includes job interviews, news interviews, class lectures, webinars, presentations to customers and even just business meetings. The 10 Things to Do Before Any Video Interview infographic from Kickresume is a great last-minute checklist before you turn on your webcam!

In the end, you can take this infographic as a checklist. You can use it to prepare for your job interview or any other video conference call.

And, oh boy, are we going to make many more of those. Sure, it took a global pandemic for companies to recognize the value of working from home but now there’s no going back. Video conference calls are here to stay. (I personally hate it but even I should probably get used to it. Damn.)

Anyway, good luck at your job interview!

I would have preferred more visual elements, but I like that this is a tightly focused infographic with a clear, useful message to a broad audience. This is one of the best uses for an infographic: an informative topic, related to the industry of the publishing company, with a popular, trending topic. This design checks all the boxes.

Designers have to remember that the infographic image file will often be shared by itself, so it always helps to include a few more things in the footer:

  • The Infographic Landing Page URL (not just the company home page). This will help readers find the full infographic and the article that went along with it. Don’t make people search for it on your website.

  • A copyright or Creative Commons statement is always a good idea when you publish an infographic.




video

Video Shows a Man Screaming 'Fake Pandemic' at a Florida Officer

A nearly two-minute, profanity-laced tirade at a code officer at a Miami Beach grocery store is the latest example of mounting tensions in the US over wearing masks to stem the spread of the coronavirus.




video

Wix Video — a great marketing tool for any website.

Increases time on page and boosts engagement with your site. Thanks to ever-increasing internet speeds, videos are in high demand. Right now, video is everywhere: on social media, websites, and apps. We are watching videos on all our screens: desktops, tablets, phones and smart TVs. Video content is expected to grow up …

Wix Video — a great marketing tool for any website. Read More »




video

Watching the World Go By: Representation Learning from Unlabeled Videos. (arXiv:2003.07990v2 [cs.CV] UPDATED)

Recent single image unsupervised representation learning techniques show remarkable success on a variety of tasks. The basic principle in these works is instance discrimination: learning to differentiate between two augmented versions of the same image and a large batch of unrelated images. Networks learn to ignore the augmentation noise and extract semantically meaningful representations. Prior work uses artificial data augmentation techniques such as cropping, and color jitter which can only affect the image in superficial ways and are not aligned with how objects actually change e.g. occlusion, deformation, viewpoint change. In this paper, we argue that videos offer this natural augmentation for free. Videos can provide entirely new views of objects, show deformation, and even connect semantically similar but visually distinct concepts. We propose Video Noise Contrastive Estimation, a method for using unlabeled video to learn strong, transferable single image representations. We demonstrate improvements over recent unsupervised single image techniques, as well as over fully supervised ImageNet pretraining, across a variety of temporal and non-temporal tasks. Code and the Random Related Video Views dataset are available at https://www.github.com/danielgordon10/vince
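The instance-discrimination principle the abstract describes is commonly formalized as an InfoNCE-style contrastive loss: an anchor embedding should score high against its positive (another view of the same instance, here e.g. a frame from the same video) and low against a batch of unrelated negatives. A minimal numpy sketch of that loss (not the authors' implementation):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE) loss for one anchor embedding.
    anchor, positive: (d,) vectors; negatives: (n, d) matrix.
    Embeddings are L2-normalized so dot products are cosine
    similarities."""
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, negs = norm(anchor), norm(positive), norm(negatives)
    # similarity of the positive pair first, then all negatives
    logits = np.concatenate(([a @ p], negs @ a)) / temperature
    logits -= logits.max()  # numerical stability
    # cross-entropy with the positive at index 0
    return -logits[0] + np.log(np.exp(logits).sum())

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=d)
loss_easy = info_nce_loss(x, x, rng.normal(size=(16, d)))   # matching positive
loss_hard = info_nce_loss(x, -x, rng.normal(size=(16, d)))  # opposite positive
# a well-matched positive should yield a smaller loss
print(loss_easy < loss_hard)
```

Minimizing this loss over many anchors is what makes the network ignore augmentation (or, with video, viewpoint and deformation) noise while keeping semantic content.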




video

Dynamic Face Video Segmentation via Reinforcement Learning. (arXiv:1907.01296v3 [cs.CV] UPDATED)

For real-time semantic video segmentation, most recent works utilised a dynamic framework with a key scheduler to make online key/non-key decisions. Some works used a fixed key scheduling policy, while others proposed adaptive key scheduling methods based on heuristic strategies, both of which may lead to suboptimal global performance. To overcome this limitation, we model the online key decision process in dynamic video segmentation as a deep reinforcement learning problem and learn an efficient and effective scheduling policy from expert information about decision history and from the process of maximising global return. Moreover, we study the application of dynamic video segmentation on face videos, a field that has not been investigated before. By evaluating on the 300VW dataset, we show that the performance of our reinforcement key scheduler outperforms that of various baselines in terms of both effective key selections and running speed. Further results on the Cityscapes dataset demonstrate that our proposed method can also generalise to other scenarios. To the best of our knowledge, this is the first work to use reinforcement learning for online key-frame decision in dynamic video segmentation, and also the first work on its application on face videos.




video

Kunster -- AR Art Video Maker -- Real time video neural style transfer on mobile devices. (arXiv:2005.03415v1 [cs.CV])

Neural style transfer is a well-known branch of deep learning research, with many interesting works and two major drawbacks. Most of the works in the field are hard to use by non-expert users and substantial hardware resources are required. In this work, we present a solution to both of these problems. We have applied neural style transfer to real-time video (over 25 frames per second), which is capable of running on mobile devices. We also investigate the works on achieving temporal coherence and present the idea of fine-tuning already-trained models to achieve stable video. What is more, we also analyze the impact of the common deep neural network architecture on the performance of mobile devices with regard to the number of layers and filters present. In the experiment section we present the results of our work with respect to the iOS devices and discuss the problems present in current Android devices as well as future possibilities. At the end we present the qualitative results of stylization and quantitative results of performance tested on the iPhone 11 Pro and iPhone 6s. The presented work is incorporated in the Kunster - AR Art Video Maker application available in Apple's App Store.




video

Accessibility in 360-degree video players. (arXiv:2005.03373v1 [cs.MM])

Any media experience must be fully inclusive and accessible to all users regardless of their ability. With the current trend towards immersive experiences, such as Virtual Reality (VR) and 360-degree video, it becomes key that these environments are adapted to be fully accessible. However, until recently the focus has been mostly on adapting the existing techniques to fit immersive displays, rather than considering new approaches for accessibility designed specifically for these increasingly relevant media experiences. This paper surveys a wide range of 360-degree video players and examines the features they include for dealing with accessibility, such as Subtitles, Audio Description, Sign Language, User Interfaces, and other interaction features, like voice control and support for multi-screen scenarios. These features have been chosen based on guidelines from standardization contributions, like in the World Wide Web Consortium (W3C) and the International Communication Union (ITU), and from research contributions for making 360-degree video consumption experiences accessible. The in-depth analysis has been part of a research effort towards the development of a fully inclusive and accessible 360-degree video player. The paper concludes by discussing how the newly developed player has gone above and beyond the existing solutions and guidelines, by providing accessibility features that meet the expectations for a widely used immersive medium, like 360-degree video.




video

Vid2Curve: Simultaneous Camera Motion Estimation and Thin Structure Reconstruction from an RGB Video. (arXiv:2005.03372v1 [cs.GR])

Thin structures, such as wire-frame sculptures, fences, cables, power lines, and tree branches, are common in the real world.

It is extremely challenging to acquire their 3D digital models using traditional image-based or depth-based reconstruction methods because thin structures often lack distinct point features and have severe self-occlusion.

We propose the first approach that simultaneously estimates camera motion and reconstructs the geometry of complex 3D thin structures in high quality from a color video captured by a handheld camera.

Specifically, we present a new curve-based approach to estimate accurate camera poses by establishing correspondences between featureless thin objects in the foreground in consecutive video frames, without requiring visual texture in the background scene to lock on.

Enabled by this effective curve-based camera pose estimation strategy, we develop an iterative optimization method with tailored measures on geometry, topology as well as self-occlusion handling for reconstructing 3D thin structures.

Extensive validations on a variety of thin structures show that our method achieves accurate camera pose estimation and faithful reconstruction of 3D thin structures with complex shape and topology at a level that has not been attained by other existing reconstruction methods.




video

Self-Supervised Human Depth Estimation from Monocular Videos. (arXiv:2005.03358v1 [cs.CV])

Previous methods on estimating detailed human depth often require supervised training with `ground truth' depth data. This paper presents a self-supervised method that can be trained on YouTube videos without known depth, which makes training data collection simple and improves the generalization of the learned network. The self-supervised learning is achieved by minimizing a photo-consistency loss, which is evaluated between a video frame and its neighboring frames warped according to the estimated depth and the 3D non-rigid motion of the human body. To solve this non-rigid motion, we first estimate a rough SMPL model at each video frame and compute the non-rigid body motion accordingly, which enables self-supervised learning on estimating the shape details. Experiments demonstrate that our method enjoys better generalization and performs much better on data in the wild.
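The photo-consistency idea in the abstract, comparing a frame against a neighboring frame warped into its view, reduces in its simplest form to a masked photometric error. A toy numpy sketch (the warp is assumed given here; the actual method derives it from the estimated depth and SMPL-based non-rigid motion):

```python
import numpy as np

def photo_consistency_loss(frame, warped_neighbor, valid_mask):
    """Mean absolute photometric error between a video frame and a
    neighboring frame warped into the same view, restricted to
    pixels where the warp is valid (e.g. not occluded)."""
    diff = np.abs(frame - warped_neighbor)
    return (diff * valid_mask).sum() / np.maximum(valid_mask.sum(), 1)

frame = np.ones((4, 4))
warped = np.ones((4, 4))
warped[0, 0] = 0.0            # one mismatched pixel
mask = np.ones((4, 4))        # all pixels valid
loss = photo_consistency_loss(frame, warped, mask)
print(loss)  # 1/16 = 0.0625
```

If the estimated depth and motion are correct, the warped neighbor matches the frame and this loss is near zero, which is exactly what makes it usable as a self-supervised training signal.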




video

DramaQA: Character-Centered Video Story Understanding with Hierarchical QA. (arXiv:2005.03356v1 [cs.CL])

Despite recent progress on computer vision and natural language processing, developing video understanding intelligence is still hard to achieve due to the intrinsic difficulty of story in video. Moreover, there is no theoretical metric for evaluating the degree of video understanding. In this paper, we propose a novel video question answering (Video QA) task, DramaQA, for a comprehensive understanding of the video story. DramaQA focuses on two perspectives: 1) hierarchical QAs as an evaluation metric based on the cognitive developmental stages of human intelligence, and 2) character-centered video annotations to model local coherence of the story. Our dataset is built upon the TV drama "Another Miss Oh" and contains 16,191 QA pairs from 23,928 video clips of various lengths, with each QA pair belonging to one of four difficulty levels. We provide 217,308 annotated images with rich character-centered annotations, including visual bounding boxes, behaviors, and emotions of main characters, and coreference resolved scripts. Additionally, we provide analyses of the dataset as well as a Dual Matching Multistream model which effectively learns character-centered representations of video to answer questions about the video. We plan to release our dataset and model publicly for research purposes and expect that our work will provide a new perspective on video story understanding research.




video

What comprises a good talking-head video generation?: A Survey and Benchmark. (arXiv:2005.03201v1 [cs.CV])

Over the years, performance evaluation has become essential in computer vision, enabling tangible progress in many sub-fields. While talking-head video generation has become an emerging research topic, existing evaluations on this topic present many limitations. For example, most approaches use human subjects (e.g., via Amazon MTurk) to evaluate their research claims directly. This subjective evaluation is cumbersome, unreproducible, and may impede the evolution of new research. In this work, we present a carefully-designed benchmark for evaluating talking-head video generation with standardized dataset pre-processing strategies. As for evaluation, we either propose new metrics or select the most appropriate ones to evaluate results in what we consider as desired properties for a good talking-head video, namely, identity preserving, lip synchronization, high video quality, and natural-spontaneous motion. By conducting a thoughtful analysis across several state-of-the-art talking-head generation approaches, we aim to uncover the merits and drawbacks of current methods and point out promising directions for future work. All the evaluation code is available at: https://github.com/lelechen63/talking-head-generation-survey.




video

Can harnessing the psychological power of video games make you healthier?

Growing up, Luke Parker played sports.…




video

Dongle device with video encoding and methods for use therewith

A universal serial bus (USB) dongle device includes a USB interface that receives selection data from a host device that indicates a selection of a first video format from a plurality of available formats. The USB interface also receives an input video signal from the host device in the first video format and a power signal from the host device. An encoding module generates a processed video signal in a second video format based on the input video signal, wherein the first video format differs from the second video format. The USB interface transfers the processed video signal to the host device.




video

Video player instance prioritization

A video player instance may be prioritized and decoding and rendering resources may be assigned to the video player instance accordingly. A video player instance may request use of a resource combination. Based on a determined priority a resource combination may be assigned to the video player instance. A resource combination may be reassigned to another video player instance upon detection that the previously assigned resource combination is no longer actively in use.




video

System and method for supporting video processing load balancing for user account management in a computing environment

A system and method can support user account management in a computing environment. The computing environment can include a video encoding pool to support load balancing and a managing server, such as a privileged account manager server. The video encoding pool includes a set of nodes that are able to perform one or more video processing tasks for another node. Furthermore, the managing server can receive a request from a managed node in the computing environment for delegating a video processing task, and can select one or more nodes from the video encoding pool to load balance and to perform the video processing task.
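The patent does not specify a selection algorithm; as a purely hypothetical illustration, a managing server delegating a task could pick the least-loaded node from the encoding pool:

```python
def pick_encoding_node(pool):
    """Select the node with the fewest active video processing tasks.
    `pool` maps node name -> active task count (a hypothetical
    representation; the patent leaves the bookkeeping unspecified)."""
    return min(pool, key=pool.get)

pool = {"node-a": 3, "node-b": 1, "node-c": 2}
chosen = pick_encoding_node(pool)
pool[chosen] += 1  # the delegated task now runs on that node
print(chosen)  # node-b
```

Least-loaded selection is only one of many possible policies; weighted or latency-aware schemes would fit the same delegation flow.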




video

Apparatus for recording and quickly retrieving video signal parts on a magnetic tape

In an apparatus for recording and quickly retrieving video signal parts on a magnetic tape, information about the local position of each video signal part is automatically stored during recording in a memory associated with the apparatus, which is designed for storing identifying information for a large number of magnetic tape cassettes. The retrieval of each video signal part on each of the cassettes can be effected substantially without delay in the quick rewind mode of operation.




video

Arrangement for automatically switching a videorecorder on and off in the absence of a code signal but in presence of a FBAS signal

The disclosed device enables the recording of television broadcasts which are preprogrammed in a memory. The presence of data lines of the television signal in combination with the presence of a color television signal is checked. When the data lines stop, the video recorder is switched on in real time by a clock time signal.




video

Audio-video multi-participant conference systems using PSTN and internet networks

A multi-participant conference system and method is described. The multi-participant system includes a PSTN client, at least one remote client and a first participant client. The PSTN client communicates audio data and the remote clients communicate audio-video data. The first participant client includes a voice over IP (VoIP) encoder, a VoIP decoder, a first audio mixer, and a second audio mixer. The VoIP encoder compresses audio data transported to the PSTN client. The VoIP decoder then decodes audio data from the PSTN client. The first audio mixer mixes the decoded audio data from the PSTN client with the audio-video data from the first participant into a first mixed audio-video data stream transmitted to the remote client. The second audio mixer mixes the audio-video data stream from the first participant with the audio-video data stream from each remote client into a second mixed audio transmitted to the PSTN client.
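The mixing stages described above amount, at their core, to summing sample streams and clamping to the valid sample range. A minimal sketch with 16-bit PCM samples as plain Python ints (real VoIP stacks operate on codec frames, not lists):

```python
def mix_pcm(stream_a, stream_b):
    """Mix two 16-bit PCM sample streams by summation, clamping
    each mixed sample to the valid int16 range [-32768, 32767]."""
    return [max(-32768, min(32767, a + b))
            for a, b in zip(stream_a, stream_b)]

pstn = [1000, -2000, 30000]        # decoded PSTN audio samples
participant = [500, -500, 10000]   # first participant's audio samples
mixed = mix_pcm(pstn, participant)
print(mixed)  # [1500, -2500, 32767]
```

The first mixer would send such a mixed stream toward the remote clients, and the second toward the PSTN client, each omitting that recipient's own audio in a full implementation.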




video

Video jukebox apparatus and a method of playing music and music videos using a video jukebox apparatus

A digital jukebox (14) allows for playback of a first offering and a second offering. The contents of each offering are individually licensed for public performance at the particular location where the jukebox is found. The jukebox (14) displays advertisements that are selected in response to user interaction with the jukebox or a number of other factors. The jukebox (14) features a screen (18 and 20) that allows the user to interact with the jukebox to select offerings, but also to respond to advertising. The jukebox can function in cooperation with a server (12) but, in the alternative, can function as an independent, stand-alone device when a connection (16) to the server (12) is not available.




video

Placing sponsored-content based on images in video content

Sponsored-content may be placed based on images in video content. A first image in a frame of a video content item is identified. The first image is matched with a second stored image. A sponsored-content item to be presented is selected based on an association between the second stored image and the sponsored-content item.




video

Adlite rich media solutions without presentation requiring use of a video player

The present invention provides techniques relating to rich media advertising. Techniques are provided in which an advertiser-provided image-based component of an advertisement creative is matched with an advertiser-provided audio component of the advertisement creative. A rich media advertisement may be served that includes the image-based component and a synchronously presented audio component. Presentation of the rich media advertisement on a user computer may not require downloading or utilization of a video player.




video

Apparatus, systems and methods for a video thumbnail electronic program guide

Video thumbnail electronic program guide (EPG) systems and methods are operable to include a video thumbnail. An exemplary embodiment receives a media content stream at a media device; picks a plurality of still image video frames from the received media content stream, wherein each still image video frame has information that is sufficient to construct the still image video frame; generate a plurality of still image video frame thumbnails, wherein each of the still image video frame thumbnails correspond to one of the still image video frames; generate a video thumbnail from the plurality of still image video frame thumbnails; and incorporate the video thumbnail with at least one program descriptor and a channel identifier associated with the media content stream into the video thumbnail EPG.




video

Inflight entertainment system with selectively preloaded seat end video caches

An inflight entertainment (IFE) system preloads from head end equipment onto seat end video caches subsets of prerecorded video entertainment programs from a library of prerecorded video entertainment programs stored on the head end equipment. Preloading is done independent of play requests made by passengers using the IFE system. The selected subsets are selected using selection metrics such as program popularity, passenger demographics and/or passenger preferences. The same or a different subset may be selected for different passengers. As a result of the selective preloading of the seat end video caches, if the head end equipment or the distribution system becomes inoperable during the flight, the IFE system is able to continue to deliver a limited offering of popular, demographically indicated and/or passenger preferred video entertainment from the seat end video caches, without requiring a large multiplier in storage capacity or loading time.




video

Methods, devices, and computer program products for providing instant messaging in conjunction with an audiovisual, video, or audio program

Methods, devices, and computer program products for providing instant messaging in conjunction with an audiovisual, video, or audio program are provided. The methods include providing an audiovisual, video, or audio program to a user. Viewer/listener input is received requesting activation of a program-based instant messaging function. A viewer/listener identifier corresponding to the viewer/listener is associated with a program identifier that uniquely identifies the audiovisual, video, or audio program being provided to the user to thereby generate a program viewer/listener record. The program viewer/listener record is transmitted to an electronic database. A list of other users who are viewing or listening to the program in addition to the viewer/listener is acquired from the electronic database. The list of other users is transmitted to the viewer/listener.




video

Method and apparatus for extracting advertisement keywords in association with situations of video scenes

A method and apparatus for extracting advertisement keywords in association with situations of scenes of video include: establishing a knowledge database including a classification hierarchy for classifying situations of scenes of video and an advertisement keyword list, segmenting a video script corresponding to a received video in units of scenes, and determining a situation corresponding to each scene with reference to the knowledge database, and extracting an advertisement keyword corresponding to the situation of a scene of the received video with reference to the knowledge database.




video

System and method for managing, converting and displaying video content on a video-on-demand platform, including ads used for drill-down navigation and consumer-generated classified ads

A video-on-demand (VOD) content delivery system has a VOD Application Server which manages a database of templates ordered in a hierarchy for presentation of video content elements of different selected types categorized in hierarchical order. The templates include those for higher-order displays which have one or more links to lower-order displays of specific content. The VOD Application Server, in response to viewer request, displays a high-order templatized display, and in response to viewer selection of a link, displays the lower-order display of specific content. The hierarchical templatized displays enable viewers to navigate to an end subject of interest while having a unique visual experience of moving through a series of displays to the end subject of interest.




video

System for adding or updating video content from internet sources to existing video-on-demand application of a digital TV services provider system

A video-on-demand (VOD) content delivery system has a VOD Application Server which manages a database of templates ordered in a hierarchy for presentation of video content elements of different selected types categorized in hierarchical order. The templates include those for higher-order displays which have one or more links to lower-order displays of specific content. The VOD Application Server, in response to viewer request, displays a high-order templatized display, and in response to viewer selection of a link, displays the lower-order display of specific content. The hierarchical templatized displays enable viewers to navigate to an end subject of interest while having a unique visual experience of moving through a series of displays to the end subject of interest. For example, the higher-order display may be a product ad and the lower-order display may be an ad for a local retailer of the product. Similarly, a viewer can navigate from national product to local product ad, or classified ad category to specific classified ad, or bulletin board topic category to specific posting. In another embodiment, the VOD content delivery system is used to deliver consumer-generated classified ads on TV. A web-based Content Management System receives consumer-generated content uploaded online in industry-standard file formats with metadata for title and topical area, and automatically converts it into video data format compatible with the VOD content delivery system indexed by title and topical area. A User Interface for the system delivers listings data to the viewer's TV indexed by title and topical area, and displays a requested classified ad in response to viewer selection.
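The drill-down navigation both abstracts describe — a higher-order display whose links lead to lower-order displays of specific content — is essentially a tree walk. A hedged sketch, with the node structure and example ads assumed:

```python
# Sketch of hierarchical templatized displays: each display node may link
# to lower-order displays, and a viewer selection walks one level down.
class Display:
    def __init__(self, title, links=None):
        self.title = title
        self.links = links or {}   # link label -> lower-order Display

    def select(self, label):
        """Return the lower-order display for the chosen link."""
        return self.links[label]

# Example from the abstract: a national product ad drilling down to a
# local retailer ad (titles are made up).
local = Display("Acme Widgets - Springfield Store")
national = Display("Acme Widgets", {"find a retailer": local})
```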




video

Video game characters having evolving traits

A server-based video game system maintains a number of video game characters having computer-simulated genetic (“digenetic”) structures that prescribe a number of physical and cognitive performance traits and characteristics for the video game characters. The system allows end users to establish remote online access to the game characters (via, e.g., the Internet). The results of competitions and training activities are based upon the game characters' digenetics, the conditions of the game environment, and the game characters' current levels of physical and cognitive development. The game characters' performance capabilities and cognition are updated continuously in response to the results of competitions and training activities. Competition and training results can be processed by the game servers and transmitted to the end user presentation devices for graphics rendering. In this manner, the video game system need not be burdened by network latency and other delays. The game system also supports game character breeding; game character evolution based upon the digenetics; and game character buying, trading, selling, and collecting.
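The continuous trait updates bounded by a character's "digenetics" can be pictured as clipped increments. This sketch is a loose illustration only; the trait names, cap model, and gain rule are all assumptions.

```python
# Hedged sketch: a game character whose performance traits update after
# training, bounded by genetic caps ("digenetics"). Values are illustrative.
class Character:
    def __init__(self, traits, genetic_caps):
        self.traits = dict(traits)       # current trait levels
        self.caps = dict(genetic_caps)   # per-trait genetic ceiling

    def train(self, trait, gain):
        """Raise a trait by the training gain, clipped at the genetic cap."""
        self.traits[trait] = min(self.traits[trait] + gain, self.caps[trait])
```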




video

Video processing and signal routing apparatus for providing picture in a picture capabilities on an electronic gaming machine

A gaming system used in a wager-based electronic gaming machine (EGM) is described. The gaming system is configured to provide picture-in-picture capabilities on the electronic gaming machine. In one embodiment, the gaming system can include a first gaming device and a second gaming device, where the first gaming device controls the second gaming device. The first gaming device can be configured to receive data from and/or communicate with an EGM controller, a value input device, and a value output device. The second gaming device can be configured to receive touchscreen data from a touchscreen display, first video data from the first gaming device, and second video data from the EGM controller. Under control of the first gaming device, the first video data and second video data can be output in various sizes and locations on the touchscreen display.
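Compositing two video feeds at configurable sizes and locations reduces to overlaying one pixel buffer onto another. A minimal sketch, with frames modeled as 2-D lists (a deliberate simplification of real video routing hardware):

```python
# Illustrative sketch: compose a primary video frame and a secondary frame
# picture-in-picture at a chosen location. Frames are 2-D lists of pixels.
def composite_pip(primary, secondary, x, y):
    """Overlay `secondary` onto a copy of `primary` with its top-left
    corner at column x, row y."""
    out = [row[:] for row in primary]
    for r, row in enumerate(secondary):
        for c, px in enumerate(row):
            out[y + r][x + c] = px
    return out
```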




video

Video processing apparatus, method of adding time code, and method of preparing editing list

A video processing apparatus is provided. The video processing apparatus includes: an inputter that inputs video signals of a plurality of systems, and a processor that generates processed video signals by performing switching on the video signals of two or more of the systems input into the inputter. The video processing apparatus further includes: a time code generator that generates a time code, and a time code adder that adds the time code to the input video signals and the generated video signals, respectively, and outputs the video signals with the time code to be recorded on a recording medium.
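Generating a running time code and attaching it to each frame can be sketched as below. The HH:MM:SS:FF layout follows common non-drop-frame timecode practice; the frame model and function names are assumptions.

```python
# Illustrative sketch of a time code generator and adder: a running
# HH:MM:SS:FF code is attached to each frame before recording.
def timecode(frame_index, fps=30):
    """Convert a frame index into a non-drop-frame HH:MM:SS:FF string."""
    seconds, ff = divmod(frame_index, fps)
    minutes, ss = divmod(seconds, 60)
    hh, mm = divmod(minutes, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def add_timecodes(frames, fps=30):
    """Pair each frame with its generated time code, as a recorder would."""
    return [(timecode(i, fps), frame) for i, frame in enumerate(frames)]
```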




video

Synchronous fusion of video and numerical data

According to typical practice of the present invention, temporal identifiers are simultaneously associated with (i) video and (ii) numerical data while these data are contemporaneously collected by, respectively, (i) a video camera filming an event and (ii) a data acquisition system acquiring numerical data from sensors obtaining sensory information relating to the event. Various modes of inventive practice provide for time codes and/or markers as the temporal identifiers. The video and the numerical data are each converted to a digital form that furthers their mutual compatibility: the video to a compressed/encoded video file, the numerical data to an Adobe XMP data file. The compressed/encoded video file and the XMP data file are merged so that the temporal identifiers are aligned with each other. The merged video file has the numerical data embedded therein and is displayable so that the video and the numerical data are synchronized in accordance with their actual real-time occurrence.
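The alignment step — matching each video frame with the numerical sample in effect at that instant — can be sketched with a timestamp search. File formats (encoded video, XMP) are abstracted away here, and all names are assumptions:

```python
# A minimal sketch of synchronous fusion: video frames and sensor samples
# each carry timestamps, and merging aligns every frame with the latest
# sample at or before it.
import bisect

def fuse(frame_times, samples):
    """samples: time-sorted list of (timestamp, value). Returns, per frame,
    (frame_time, value in effect at that time), or None if no sample yet."""
    times = [t for t, _ in samples]
    fused = []
    for ft in frame_times:
        i = bisect.bisect_right(times, ft) - 1
        fused.append((ft, samples[i][1] if i >= 0 else None))
    return fused
```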




video

Data creation device and playback device for video picture in video stream

A data creation device performing compression encoding on first frame images showing a view from a first viewpoint and second frame images showing a view from a second viewpoint generates a stream in an MPEG-2 format, and base-view/dependent-view video streams in a format conforming to an MPEG-4 MVC format. The stream in the MPEG-2 format is generated by performing compression encoding on the first frame images. The base-view video stream is a stream of dummy data having the same number of frames as and a smaller data amount than the stream in the MPEG-2 format. The dependent-view video stream is obtained by performing compression encoding on each frame of the second frame images, with reference to a frame of the stream in the MPEG-2 format to be presented at the same time as a frame of the base-view video stream corresponding to the frame of the second frame images.




video

Video frame still image sequences

An electronic device may determine to present a video frame still image sequence version of a video instead of the video. The electronic device may derive a plurality of still images from the video. The electronic device may generate the video frame still image sequence by associating the plurality of still images. The electronic device may present the video frame still image sequence. The video frame still image sequence may be displayed according to timing information to resemble play of the video. In some cases, audio may also be derived from the video. In such cases, display of the video frame still image sequence may be performed along with play of the audio.
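Deriving a still-image sequence with the timing needed to resemble playback can be sketched as periodic frame sampling. The frame-list model and the sampling-rate parameter below are illustrative assumptions:

```python
# Sketch: derive a still-image sequence from a video's frames, keeping one
# frame per interval together with its display time so a player can pace
# the stills to resemble the original video's timeline.
def still_sequence(frames, fps, stills_per_second=2):
    step = max(1, round(fps / stills_per_second))
    # Each entry: (display time in seconds, frame).
    return [(i / fps, frames[i]) for i in range(0, len(frames), step)]
```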




video

Method and apparatus for providing additional information of video using visible light communication

A method and apparatus are provided for providing additional information included in a video displayed on a display device using visible light communication (VLC). A data packet including video data and additional information for an object included in the video data is received. The video data is extracted from the data packet and decoded. The additional information is likewise extracted from the data packet and decoded. The decoded video data is output through the display device, and at the same time, the additional information for a particular object included in the video is transmitted, based on a VLC protocol, using a light emitting device provided in the display device. The additional information providing apparatus includes an image sensor module, a display module, a visible light receiving module, an additional information manager, and a controller.




video

Method of managing multiple wireless video traffic and electronic device thereof

A method and a playback control device are provided. The method, performed by the playback control device, includes: receiving a first request to play back first data of a first wireless multimedia data type having a first priority; and playing back the first data if no other data of a wireless multimedia data type having a priority higher than the first priority is received.
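The priority rule above can be sketched as a guard on each playback request. In this illustration, lower numbers mean higher priority, and the controller model is an assumption:

```python
# Hedged sketch of the priority rule: a playback request is honored only
# if no higher-priority wireless multimedia stream is active.
# Convention (assumed): lower numbers mean higher priority.
class PlaybackController:
    def __init__(self):
        self.active = {}   # stream id -> priority

    def request(self, stream_id, priority):
        """Grant playback unless a strictly higher-priority stream is active."""
        if any(p < priority for p in self.active.values()):
            return False   # denied: higher-priority traffic in progress
        self.active[stream_id] = priority
        return True
```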




video

Systems and methods for generation of composite video from multiple asynchronously recorded input streams

Systems and methods are provided for generating a composite video from a plurality of asynchronously recorded input video streams. A plurality of segments of the input video streams are identified. For a particular segment, the number of input video streams that were recording during that segment is determined. A video display configuration for the particular segment is determined based on that number, where the video display configuration includes a display sub-region for each input video stream that was recording. A composite video is generated that includes a portion of video associated with each of the segments, where the portion associated with the particular segment is formatted according to the video display configuration and displays the video streams that were recording during that segment in the display sub-regions of the video display configuration.
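Choosing a display configuration from the number of active streams can be sketched as a grid-layout computation. The near-square grid policy and normalized rectangles below are assumptions, not the patent's specific configurations:

```python
# Illustrative sketch: given how many input streams were recording during
# a segment, compute a near-square grid of display sub-regions as
# normalized (x, y, width, height) rectangles.
import math

def layout(n_streams):
    cols = math.ceil(math.sqrt(n_streams))
    rows = math.ceil(n_streams / cols)
    w, h = 1.0 / cols, 1.0 / rows
    return [((i % cols) * w, (i // cols) * h, w, h) for i in range(n_streams)]
```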




video

Measuring apparatus for measuring stereo video format and associated method

A measuring apparatus for measuring a stereo video format includes an active space measuring circuit and a decision circuit. The active space measuring circuit is utilized for determining a position of an active space of a frame packing to generate an active space measuring result according to pixel values of a plurality of scan lines of the frame packing. The decision circuit is coupled to the active space measuring circuit, and is utilized for determining the stereo video format according to at least the active space measuring result.
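In frame-packed stereo video, the active space between the two views is typically a band of constant-value scan lines, so locating it amounts to finding a run of uniform lines. A software sketch of that idea (the circuit itself is hardware; the run-length threshold and data model are assumptions):

```python
# Sketch: find the active space in a frame packing by scanning for the
# first run of uniform (constant-value) scan lines.
def find_active_space(scan_lines, min_run=2):
    """scan_lines: list of lists of pixel values. Returns (start, length)
    of the first run of uniform lines, or None if no such run exists."""
    start = None
    for i, line in enumerate(scan_lines):
        uniform = len(set(line)) == 1
        if uniform and start is None:
            start = i
        elif not uniform and start is not None:
            if i - start >= min_run:
                return (start, i - start)
            start = None
    if start is not None and len(scan_lines) - start >= min_run:
        return (start, len(scan_lines) - start)
    return None
```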




video

Automatic detection, removal, replacement and tagging of flash frames in a video

A method for automatically detecting, eliminating, and replacing flash frames in digital video uses the detected flash frames to categorize and tag the surrounding frames as a relevant area of the digital video. A flash frame is detected while the digital video is being acquired during capture; the flash frame is replaced with a newly constructed frame that is interpolated from surrounding frames; and then, using the detected flash as a timestamp, the surrounding frames are tagged.
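The detect-replace-tag loop can be sketched with per-frame luminance values. Frames are reduced to mean-luminance numbers for brevity, and the spike threshold is an assumption, not the patent's detector:

```python
# Hedged sketch of flash-frame handling: a frame whose mean luminance
# spikes far above both neighbors is treated as a flash, replaced by the
# average (interpolation) of its neighbors, and its index recorded as a tag.
def remove_flashes(luma, threshold=0.3):
    out, tags = list(luma), []
    for i in range(1, len(luma) - 1):
        if luma[i] - luma[i - 1] > threshold and luma[i] - luma[i + 1] > threshold:
            out[i] = (luma[i - 1] + luma[i + 1]) / 2   # interpolated replacement
            tags.append(i)   # timestamp/tag marking the relevant area
    return out, tags
```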