
Decoding apparatus, decoding method, encoding apparatus, encoding method, and editing apparatus

A decoding apparatus (10) is disclosed which includes: a storing means (11) for storing encoded audio signals including multi-channel audio signals; a transforming means (40) for transforming the encoded audio signals to generate transform block-based audio signals in a time domain; a window processing means (41) for multiplying the transform block-based audio signals by a product of a mixture ratio of the audio signals and a first window function, the product being a second window function; a synthesizing means (43) for overlapping the multiplied transform block-based audio signals to synthesize audio signals of respective channels; and a mixing means (14) for mixing audio signals of the respective channels between the channels to generate a downmixed audio signal. Furthermore, an encoding apparatus is also disclosed which downmixes the multi-channel audio signals, encodes the downmixed audio signals, and generates the encoded, downmixed audio signals.
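
For concreteness, below is a minimal NumPy sketch of the window-and-overlap step described above, under several assumptions not stated in the abstract: 50%-overlapping blocks, a sine window as the first window function, and two channels with fixed mixing ratios. Folding the per-channel mixing ratio into the window (the "second window function") lets the final inter-channel mixing reduce to a plain sum.

```python
import numpy as np

def overlap_add_downmix(blocks_per_channel, mix_ratios, block_len=8):
    """Window each transform block with (mix_ratio * sine_window) and
    overlap-add with 50% overlap, then sum the channels into a downmix."""
    hop = block_len // 2
    first_window = np.sin(np.pi * (np.arange(block_len) + 0.5) / block_len)

    channel_signals = []
    for blocks, ratio in zip(blocks_per_channel, mix_ratios):
        # "Second window function": product of the mixing ratio and the first window.
        second_window = ratio * first_window
        out = np.zeros(hop * (len(blocks) + 1))
        for i, block in enumerate(blocks):
            out[i * hop:i * hop + block_len] += second_window * block
        channel_signals.append(out)

    # Mixing between channels reduces to a plain sum, because the
    # per-channel mixing gains were already applied inside the window.
    return np.sum(channel_signals, axis=0), channel_signals

# Two channels, three time-domain blocks each (e.g. inverse-transform output).
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((3, 8)) for _ in range(2)]
downmix, per_channel = overlap_add_downmix(blocks, mix_ratios=[0.7, 0.3])
print(downmix.shape)  # (16,)
```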





Apparatus for processing an audio signal and method thereof

An apparatus for processing an audio signal and method thereof are disclosed. The present invention includes receiving a downmix signal and side information; extracting control restriction information from the side information; receiving control information for controlling gain or panning of at least one object signal; generating at least one of first multi-channel information and first downmix processing information based on the control information and object information, without using the control restriction information; and generating an output signal by applying the at least one of the first multi-channel information and the first downmix processing information to the downmix signal, wherein the control restriction information relates to a parameter indicating a limiting degree of the control information.





Sparse audio

A method comprising: sampling received audio at a first rate to produce a first audio signal; transforming the first audio signal into a sparse domain to produce a sparse audio signal; re-sampling the sparse audio signal to produce a re-sampled sparse audio signal; and providing the re-sampled sparse audio signal, wherein bandwidth required for accurate audio reproduction is removed but bandwidth required for spatial audio encoding is retained; and/or a method comprising: receiving a first sparse audio signal for a first channel; receiving a second sparse audio signal for a second channel; and processing the first sparse audio signal and the second sparse audio signal to produce one or more inter-channel spatial audio parameters.
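
The sketch below illustrates both claimed methods in a hedged way: a DCT stands in for the unspecified sparse-domain transform, keeping only the largest coefficients stands in for the re-sampling that discards reproduction bandwidth, and an inter-channel level difference stands in for the spatial audio parameters. The function names and the choice of transform are assumptions, not taken from the abstract.

```python
import numpy as np
from scipy.fft import dct

def to_sparse(audio, keep=32):
    """Transform to a sparse domain (here: a DCT) and keep only the
    'keep' largest-magnitude coefficients; everything else is zeroed.
    Detail needed for accurate reproduction is discarded while the
    coarse structure used for spatial parameters survives."""
    coeffs = dct(audio, norm='ortho')
    sparse = np.zeros_like(coeffs)
    idx = np.argsort(np.abs(coeffs))[-keep:]
    sparse[idx] = coeffs[idx]
    return sparse

def inter_channel_level_difference(sparse_left, sparse_right, eps=1e-12):
    """One example spatial parameter derived from two sparse channel signals."""
    e_l = np.sum(sparse_left ** 2) + eps
    e_r = np.sum(sparse_right ** 2) + eps
    return 10.0 * np.log10(e_l / e_r)

rng = np.random.default_rng(1)
left = rng.standard_normal(1024)
right = 0.5 * left + 0.1 * rng.standard_normal(1024)
icld = inter_channel_level_difference(to_sparse(left), to_sparse(right))
print(f"ICLD: {icld:.1f} dB")
```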





Audio controlling apparatus, audio correction apparatus, and audio correction method

According to one embodiment, an audio controlling apparatus includes a first receiver configured to receive an audio signal, a second receiver configured to receive environmental sound, a temporary gain calculator configured to calculate a temporary gain based on the environmental sound received by the second receiver, a sound type determination module configured to determine the sound type of a main component of the audio signal received by the first receiver, and a gain controller configured to stabilize the temporary gain calculated by the temporary gain calculator and set a gain when it is determined that the sound type of the main component of the audio signal received by the first receiver is music.
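
A toy Python sketch of the gain path described above, with made-up names and thresholds: a temporary gain is derived from the environmental sound level, and when the sound-type determination says the main component is music, the gain controller only moves slowly toward that temporary gain instead of following it directly.

```python
def temporary_gain(environment_rms, target_rms=0.1):
    """Raise the playback gain as the environment gets louder (illustrative rule)."""
    return 1.0 + environment_rms / target_rms

def stabilized_gain(previous_gain, temp_gain, sound_type, smoothing=0.05):
    """When the main component of the audio is music, move only slowly toward
    the temporary gain so the volume does not pump with every noise burst."""
    if sound_type == "music":
        return previous_gain + smoothing * (temp_gain - previous_gain)
    return temp_gain  # e.g. speech: follow the temporary gain directly

gain = 1.0
for env_rms, kind in [(0.05, "music"), (0.30, "music"), (0.30, "speech")]:
    gain = stabilized_gain(gain, temporary_gain(env_rms), kind)
    print(f"{kind:6s} env_rms={env_rms:.2f} -> gain={gain:.2f}")
```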





Methods and apparatus to generate and use content-aware watermarks

Methods and apparatus to generate and use content-aware watermarks are disclosed herein. In a disclosed example method, media composition data is received and at least one word present in an audio track of the media composition data is selected. The word is then located in a watermark.





Systems and methods for identifying and suggesting emoticons

Computer-implemented systems and methods are provided for suggesting emoticons for insertion into text based on an analysis of sentiment in the text. An example method includes: determining a first sentiment of text in a text field; selecting first text from the text field in proximity to a current position of an input cursor in the text field; identifying one or more candidate emoticons wherein each candidate emoticon is associated with a respective score indicating relevance to the first text and the first sentiment based on, at least, historical user selections of emoticons for insertion in proximity to respective second text having a respective second sentiment; providing one or more candidate emoticons having respective highest scores for user selection; and receiving user selection of one or more of the provided emoticons and inserting the selected emoticons into the text field at the current position of the input cursor.
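
A small, illustrative scoring routine in Python (the field names, weights, and similarity measures are assumptions): each candidate emoticon is scored by comparing the current text and sentiment against historical insertion contexts, and the highest-scoring candidates would be offered for selection.

```python
from dataclasses import dataclass

@dataclass
class HistoricalSelection:
    emoticon: str
    nearby_text: set      # words that surrounded the past insertion point
    sentiment: float      # -1.0 (negative) .. +1.0 (positive)

def score_candidates(first_text, first_sentiment, history):
    """Score each emoticon by how similar past insertion contexts were
    to the current text and sentiment (higher = more relevant)."""
    words = set(first_text.lower().split())
    scores = {}
    for sel in history:
        text_overlap = len(words & sel.nearby_text) / max(len(words), 1)
        sentiment_match = 1.0 - abs(first_sentiment - sel.sentiment) / 2.0
        scores[sel.emoticon] = max(scores.get(sel.emoticon, 0.0),
                                   0.5 * text_overlap + 0.5 * sentiment_match)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

history = [
    HistoricalSelection(":)", {"great", "weekend"}, 0.8),
    HistoricalSelection(":(", {"missed", "flight"}, -0.7),
]
print(score_candidates("what a great day", first_sentiment=0.9, history=history)[:1])
```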





Extracting information from unstructured text using generalized extraction patterns

Methods, systems, and apparatus, including computer program products, for extracting information from unstructured text. Fact pairs are used to extract basic patterns from a body of text. Patterns are generalized by replacing words with classes of similar words. Generalized patterns are used to extract further fact pairs from the body of text. The process can begin with fact pairs, basic patterns, or generalized patterns.
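
The sketch below walks through the three stages on a toy corpus: basic patterns are harvested from known fact pairs, words in the patterns are replaced by word classes, and the generalized patterns are matched back against the text to extract new pairs. The regular-expression machinery and the class names are illustrative assumptions, not the patented method.

```python
import re

def extract_basic_patterns(fact_pairs, corpus):
    """Keep the words between the two members of a known fact pair
    as a basic pattern, e.g. ('Paris', 'France') -> 'is the capital of'."""
    patterns = set()
    for a, b in fact_pairs:
        for sentence in corpus:
            m = re.search(re.escape(a) + r"\s+(.+?)\s+" + re.escape(b), sentence)
            if m:
                patterns.add(m.group(1))
    return patterns

def generalize(patterns, word_classes):
    """Replace each word with the name of its class of similar words,
    e.g. 'capital' and 'city' both map to '<CITY_WORD>'."""
    return {" ".join(word_classes.get(w, w) for w in p.split()) for p in patterns}

def apply_patterns(generalized, word_classes, corpus):
    """Expand class names back into alternations and pull new fact pairs."""
    members = {}
    for word, cls in word_classes.items():
        members.setdefault(cls, []).append(re.escape(word))
    facts = set()
    for pattern in generalized:
        body = " ".join(
            "(?:%s)" % "|".join(members[tok]) if tok in members else re.escape(tok)
            for tok in pattern.split())
        for sentence in corpus:
            facts.update(re.findall(r"(\w+)\s+" + body + r"\s+(\w+)", sentence))
    return facts

corpus = ["Paris is the capital of France", "Berlin is the city of Germany"]
word_classes = {"capital": "<CITY_WORD>", "city": "<CITY_WORD>"}
basic = extract_basic_patterns({("Paris", "France")}, corpus)
new_facts = apply_patterns(generalize(basic, word_classes), word_classes, corpus)
print(new_facts)  # expected to include ('Berlin', 'Germany')
```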





Manner of pronunciation-influenced search results

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating search results. In one aspect, a method includes obtaining a transcription of a voice query, and data that identifies an accent of the voice query, submitting the transcription and the data that identifies the accent of the voice query to a search engine to generate one or more accent-influenced results of the voice query, and providing the accent-influenced results to a client device for display.





Adaptive grouping of parameters for enhanced coding efficiency

The present invention is based on the finding that parameters, including a first set of parameters of a representation of a first portion of an original signal and a second set of parameters of a representation of a second portion of the original signal, can be efficiently encoded when the parameters are arranged in a first sequence of tuples and a second sequence of tuples. The first sequence of tuples includes tuples of parameters having two parameters from a single portion of the original signal, and the second sequence of tuples includes tuples of parameters having one parameter from the first portion and one parameter from the second portion of the original signal. A bit estimator estimates the number of bits necessary to encode the first and the second sequence of tuples. Only the sequence of tuples that results in the lower number of bits is encoded.
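
As a rough illustration, the Python sketch below forms both tuple sequences, estimates the bits each would need with a simple empirical-entropy stand-in for a real bit counter, and keeps only the cheaper grouping; the parameter values, names, and the entropy-based estimate are all assumptions.

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Rough bit estimate: empirical entropy of the tuple sequence,
    standing in for a real Huffman/arithmetic bit counter."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return sum(-c * math.log2(c / total) for c in counts.values())

def group_within_portions(first, second):
    """First grouping: both parameters of a tuple come from one portion."""
    return list(zip(first[0::2], first[1::2])) + list(zip(second[0::2], second[1::2]))

def group_across_portions(first, second):
    """Second grouping: one parameter from each portion per tuple."""
    return list(zip(first, second))

def choose_grouping(first, second):
    a = group_within_portions(first, second)
    b = group_across_portions(first, second)
    # Only the grouping that is estimated to need fewer bits is encoded.
    return ("within", a) if entropy_bits(a) <= entropy_bits(b) else ("across", b)

first_portion  = [0, 1, 2, 3, 4, 5]   # e.g. quantized parameters of portion 1
second_portion = [0, 1, 2, 3, 4, 5]   # parameters of portion 2 (illustrative)
print(choose_grouping(first_portion, second_portion)[0])
```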





Method and apparatus for processing audio frames to transition between different codecs

A method (700, 800) and apparatus (100, 200) processes audio frames to transition between different codecs. The method can include producing (720), using a first coding method, a first frame of coded output audio samples by coding a first audio frame in a sequence of frames. The method can include forming (730) an overlap-add portion of the first frame using the first coding method. The method can include generating (740) a combination first frame of coded audio samples based on combining the first frame of coded output audio samples with the overlap-add portion of the first frame. The method can include initializing (760) a state of a second coding method based on the combination first frame of coded audio samples. The method can include constructing (770) an output signal based on the initialized state of the second coding method.





Encoder, decoder and methods for encoding and decoding data segments representing a time-domain data stream

An apparatus for decoding data segments representing a time-domain data stream, a data segment being encoded in the time domain or in the frequency domain, a data segment being encoded in the frequency domain having successive blocks of data representing successive and overlapping blocks of time-domain data samples. The apparatus includes a time-domain decoder for decoding a data segment being encoded in the time domain and a processor for processing the data segment being encoded in the frequency domain and output data of the time-domain decoder to obtain overlapping time-domain data blocks. The apparatus further includes an overlap/add-combiner for combining the overlapping time-domain data blocks to obtain a decoded data segment of the time-domain data stream.





Audio encoder, audio decoder, methods for encoding and decoding an audio signal, and a computer program

An encoder for providing an audio stream on the basis of a transform-domain representation of an input audio signal includes a quantization error calculator configured to determine a multi-band quantization error over a plurality of frequency bands of the input audio signal for which separate band gain information is available. The encoder also includes an audio stream provider for providing the audio stream such that the audio stream includes information describing an audio content of the frequency bands and information describing the multi-band quantization error. A decoder for providing a decoded representation of an audio signal on the basis of an encoded audio stream representing spectral components of frequency bands of the audio signal includes a noise filler for introducing noise into spectral components of a plurality of frequency bands to which separate frequency band gain information is associated on the basis of a common multi-band noise intensity value.
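
A hedged NumPy sketch of the two sides described above (the band structure, gain values, and mean-squared-error measure are assumptions): the encoder derives one common noise level across all bands, and the decoder uses that single value, scaled by each band's own gain, to fill spectral bins that were quantized to zero.

```python
import numpy as np

def multiband_quantization_error(original_bands, quantized_bands):
    """Encoder side: one common noise-intensity value describing the energy
    lost to quantization, averaged over all bands that have their own gain."""
    errors = [np.mean((o - q) ** 2) for o, q in zip(original_bands, quantized_bands)]
    return float(np.mean(errors))

def noise_fill(decoded_bands, band_gains, common_noise_level, rng):
    """Decoder side: insert noise into zero-quantized spectral bins,
    scaled by the common noise level and the per-band gain."""
    filled = []
    for band, gain in zip(decoded_bands, band_gains):
        noise = rng.standard_normal(band.shape) * np.sqrt(common_noise_level)
        filled.append(np.where(band == 0.0, gain * noise, band))
    return filled

rng = np.random.default_rng(2)
original = [rng.standard_normal(16) for _ in range(4)]
quantized = [np.round(b * 2) / 2 for b in original]       # coarse toy quantizer
level = multiband_quantization_error(original, quantized)
restored = noise_fill(quantized, band_gains=[1.0, 0.8, 0.6, 0.4],
                      common_noise_level=level, rng=rng)
print(f"common noise level: {level:.4f}")
```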





Thought recollection and speech assistance device

Some embodiments of the inventive subject matter include a method for detecting speech loss and supplying appropriate recollection data to the user. Such embodiments include detecting a speech stream from a user, converting the speech stream to text, storing the text, detecting an interruption to the speech stream, wherein the interruption to the speech stream indicates speech loss by the user, searching a catalog using the text as a search parameter to find relevant catalog data, and presenting the relevant catalog data to remind the user about the speech stream.





System and methods for matching an utterance to a template hierarchy

A system and methods for matching at least one word of an utterance against a set of template hierarchies to select the best matching template or set of templates corresponding to the utterance. Certain embodiments of the system and methods determine at least one exact, inexact, and partial match between the at least one word of the utterance and at least one term within the template hierarchy to select and populate a template or set of templates corresponding to the utterance. The populated template or set of templates may then be used to generate a narrative template or a report template.
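
A minimal Python illustration of exact, inexact, and partial matching against template terms, using difflib similarity as a stand-in for whatever matching the patented system uses; the template names, thresholds, and scores are made up.

```python
from difflib import SequenceMatcher

def match_term(word, term):
    """Classify the match between an utterance word and a template term."""
    if word == term:
        return "exact", 1.0
    ratio = SequenceMatcher(None, word, term).ratio()
    if ratio >= 0.8:
        return "inexact", ratio          # e.g. minor mispronunciation / ASR error
    if term.startswith(word) or word.startswith(term):
        return "partial", 0.5
    return None, 0.0

def score_template(utterance_words, template_terms):
    """Best-matching template = highest summed term score."""
    return sum(max(match_term(w, t)[1] for w in utterance_words)
               for t in template_terms)

templates = {
    "chest_xray": ["chest", "radiograph", "clear"],
    "knee_mri":   ["knee", "mri", "meniscus"],
}
utterance = "the chest radiograf is clear".split()
best = max(templates, key=lambda name: score_template(utterance, templates[name]))
print(best)  # chest_xray
```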





Speaker recognition from telephone calls

The present invention relates to a method for speaker recognition, comprising the steps of obtaining and storing speaker information for at least one target speaker; obtaining a plurality of speech samples from a plurality of telephone calls from at least one unknown speaker; classifying the speech samples according to the at least one unknown speaker thereby providing speaker-dependent classes of speech samples; extracting speaker information for the speech samples of each of the speaker-dependent classes of speech samples; combining the extracted speaker information for each of the speaker-dependent classes of speech samples; comparing the combined extracted speaker information for each of the speaker-dependent classes of speech samples with the stored speaker information for the at least one target speaker to obtain at least one comparison result; and determining whether one of the at least one unknown speakers is identical with the at least one target speaker based on the at least one comparison result.
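
The Python sketch below assumes each speech sample has already been reduced to an embedding vector (an assumption; the abstract only says "speaker information"): the per-class information is combined by averaging, compared to the stored target-speaker vector by cosine similarity, and thresholded to reach a decision.

```python
import numpy as np

def combine(samples):
    """Combine the speaker information extracted from all speech samples of one
    speaker-dependent class (here: average the per-sample embedding vectors)."""
    return np.mean(samples, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_target(class_samples, target_embedding, threshold=0.7):
    """Compare the combined information of one unknown-speaker class with the
    stored target-speaker information and decide on identity."""
    score = cosine(combine(class_samples), target_embedding)
    return score >= threshold, score

rng = np.random.default_rng(3)
target = rng.standard_normal(64)
# Several calls classified as coming from the same unknown speaker:
unknown_class = [target + 0.3 * rng.standard_normal(64) for _ in range(5)]
decision, score = is_target(unknown_class, target)
print(decision, round(score, 2))
```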





System, method and program product for providing automatic speech recognition (ASR) in a shared resource environment

A speech recognition system, method of recognizing speech and a computer program product therefor. A client device identified with a context for an associated user selectively streams audio to a provider computer, e.g., a cloud computer. Speech recognition receives streaming audio, maps utterances to specific textual candidates and determines a likelihood of a correct match for each mapped textual candidate. A context model selectively winnows candidates to resolve recognition ambiguity according to context whenever multiple textual candidates are recognized as potential matches for the same mapped utterance. Matches are used to update the context model, which may be used for multiple users in the same context.





Language model creation device

This device 301 stores a first content-specific language model representing a probability that a specific word appears in a word sequence representing a first content, and a second content-specific language model representing a probability that the specific word appears in a word sequence representing a second content. Based on a first probability parameter representing a probability that a content represented by a target word sequence, included in a speech recognition hypothesis generated by a speech recognition process of recognizing a word sequence corresponding to a speech, is the first content, a second probability parameter representing a probability that the content represented by the target word sequence is the second content, the first content-specific language model, and the second content-specific language model, the device creates a language model representing a probability that the specific word appears in a word sequence corresponding to the part of the speech that corresponds to the target word sequence.
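
As a simple illustration of the idea, the sketch below linearly interpolates two content-specific word-probability tables, weighting each by the probability that the target word sequence belongs to that content; the dictionaries and weights are toy values, and linear interpolation is an assumption rather than the device's actual combination rule.

```python
def create_language_model(p_first, p_second, lm_first, lm_second):
    """Mix two content-specific language models, weighting each by the
    probability that the target word sequence belongs to that content."""
    total = p_first + p_second
    w1, w2 = p_first / total, p_second / total
    vocab = set(lm_first) | set(lm_second)
    return {w: w1 * lm_first.get(w, 0.0) + w2 * lm_second.get(w, 0.0) for w in vocab}

# Word-appearance probabilities for two contents (e.g. "sports" vs "finance").
lm_sports  = {"goal": 0.05, "market": 0.001}
lm_finance = {"goal": 0.002, "market": 0.04}

# The recognizer believes the current passage is 80% likely to be sports content.
mixed = create_language_model(0.8, 0.2, lm_sports, lm_finance)
print({w: round(p, 4) for w, p in mixed.items()})
```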





Biometric voice command and control switching device and method of use

A biometric voice command and control switching device has a microphone assembly for receiving a currently spoken challenge utterance and a reference utterance, and a voice processing circuit for creating electronic signals indicative thereof. The device further includes a memory for storing the electronic signals, and a processor for comparing the electronic signals to determine if there is a match. If there is a match, an interface circuit enables the operable control of the controlled device.





Low power activation of a voice activated device

In a mobile device, a bone conduction or vibration sensor is used to detect the user's speech and the resulting output is used as the source for a low power Voice Trigger (VT) circuit that can activate the Automatic Speech Recognition (ASR) of the host device. This invention is applicable to mobile devices such as wearable computers with head mounted displays, mobile phones and wireless headsets and headphones which use speech recognition for the entering of input commands and control. The speech sensor can be a bone conduction microphone used to detect sound vibrations in the skull, or a vibration sensor, used to detect sound pressure vibrations from the user's speech. This VT circuit can be independent of any audio components of the host device and can therefore be designed to consume ultra-low power. Hence, this VT circuit can be active when the host device is in a sleeping state and can be used to wake the host device on detection of speech from the user. This VT circuit will be resistant to outside noise and react solely to the user's voice.





Messaging response system providing translation and conversion of written language into a different spoken language

A messaging response system is disclosed wherein a service providing system provides services to users via messaging communications. In accordance with an exemplary embodiment of the present invention, multiple respondents servicing users through messaging communications may appear to simultaneously use a common “screen name” identifier.





Speech recognition and synthesis utilizing context dependent acoustic models containing decision trees

A speech recognition method including the steps of receiving, from a known speaker, a speech input comprising a sequence of observations, and determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model. The acoustic model has a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation and has been trained using first training data and adapted to said speaker using second training data. The speech recognition method also determines the likelihood of a sequence of observations occurring in a given language using a language model, combines the likelihoods determined by the acoustic model and the language model, and outputs a sequence of words identified from said speech input signal. The acoustic model is context based for the speaker, the context-based information being contained in the model using a plurality of decision trees, and the structure of the decision trees is based on the second training data.





Systems, methods, and apparatus for gain factor attenuation

A method of signal processing according to one embodiment includes calculating an envelope of a first signal that is based on a low-frequency portion of a speech signal, calculating an envelope of a second signal that is based on a high-frequency portion of the speech signal, and calculating a plurality of gain factor values according to a time-varying relation between the envelopes of the first and second signal. The method includes attenuating, based on a variation over time of a relation between the envelopes of the first and second signals, at least one of the plurality of gain factor values. In one example, the variation over time of a relation between the envelopes is indicated by at least one distance among the plurality of gain factor values.
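
A NumPy sketch of the gain-factor path under stated assumptions (RMS frame envelopes, a max-minus-min spread as the "distance among gain factor values", and a median-based attenuation rule): gain factors relate the high-band envelope to the low-band envelope, and when they vary strongly over time the largest ones are pulled down.

```python
import numpy as np

def envelope(signal, frame=160):
    """Per-frame RMS envelope."""
    frames = signal[:len(signal) // frame * frame].reshape(-1, frame)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def gain_factors(lowband, highband, eps=1e-9):
    """Gain factors relate the high-band envelope to the low-band envelope."""
    return envelope(highband) / (envelope(lowband) + eps)

def attenuate_outliers(gains, spread_threshold=2.0, attenuation=0.5):
    """If the gain factors vary strongly over time (large distance between the
    largest and smallest factor), pull the largest ones down to avoid artifacts."""
    spread = np.max(gains) - np.min(gains)
    if spread > spread_threshold:
        median = np.median(gains)
        gains = np.where(gains > median, median + attenuation * (gains - median), gains)
    return gains

rng = np.random.default_rng(4)
low  = rng.standard_normal(1600)
high = np.concatenate([0.2 * rng.standard_normal(800), 2.5 * rng.standard_normal(800)])
print(np.round(attenuate_outliers(gain_factors(low, high)), 2))
```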





Multi-resolution switched audio encoding/decoding scheme

An audio encoder for encoding an audio signal has a first coding branch, the first coding branch comprising a first converter for converting a signal from a time domain into a frequency domain. Furthermore, the audio encoder has a second coding branch comprising a second time/frequency converter. Additionally, a signal analyzer for analyzing the audio signal is provided. The signal analyzer, on the one hand, determines whether an audio portion is effective in the encoder output signal as a first encoded signal from the first coding branch or as a second encoded signal from the second coding branch. On the other hand, the signal analyzer determines a time/frequency resolution to be applied by the converters when generating the encoded signals. An output interface includes, in addition to the first encoded signal and the second encoded signal, resolution information identifying the resolution used by the first time/frequency converter and the resolution used by the second time/frequency converter.





Audio signal decoder, time warp contour data provider, method and computer program

An audio signal decoder has a time warp contour calculator, a time warp contour data rescaler and a warp decoder. The time warp contour calculator is configured to generate time warp contour data repeatedly restarting from a predetermined time warp contour start value, based on time warp contour evolution information describing a temporal evolution of the time warp contour. The time warp contour data rescaler is configured to rescale at least a portion of the time warp contour data such that a discontinuity at a restart is avoided, reduced or eliminated in a rescaled version of the time warp contour. The warp decoder is configured to provide the decoded audio signal representation, based on an encoded audio signal representation and using the rescaled version of the time warp contour.





Image-based character recognition

Various embodiments enable a device to perform tasks such as processing an image to recognize and locate text in the image, and providing the recognized text to an application executing on the device for performing a function (e.g., calling a number, opening an internet browser, etc.) associated with the recognized text. In at least one embodiment, processing the image includes substantially simultaneously or concurrently processing the image with at least two recognition engines, such as at least two optical character recognition (OCR) engines, running in a multithreaded mode. In at least one embodiment, the recognition engines can be tuned so that their respective processing speeds are roughly the same. Utilizing multiple recognition engines enables processing latency to be close to that of using only one recognition engine.





Multilingual electronic transfer dictionary containing topical codes and method of use

A multilingual electronic transfer dictionary provides for automatic topic disambiguation by including one or more topic codes in definitions contained in the dictionary. Automatic topic disambiguation is accomplished by determining the frequencies of topic codes within a block of text. Dictionary entries having more frequently occurring topic codes are preferentially selected over those having less frequently occurring topic codes. When the topic codes are members of a hierarchical topical coding system, such as the International Patent Classification system, an iterative method can be used which starts with a coarser level of the coding system and is repeated at finer levels until an ambiguity is resolved. The dictionary is advantageously used for machine translation, e.g., between Japanese and English.
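
The toy Python routine below shows the frequency-based selection with an iterative coarse-to-fine pass over code prefixes, loosely modeled on IPC-style codes; the dictionary contents, the prefix-length notion of "level", and the tie-breaking are all illustrative assumptions.

```python
from collections import Counter

def disambiguate(word, entries, text_block, code_hierarchy_levels=3):
    """Pick the dictionary entry whose topic codes occur most often in the
    surrounding block of text, starting from coarse code prefixes and
    refining until the ambiguity is resolved."""
    # Collect the topic codes of every dictionary entry found in the block.
    block_codes = Counter()
    for w in text_block.split():
        for entry in entries.get(w.lower(), []):
            block_codes.update(entry["codes"])

    candidates = entries[word]
    for level in range(1, code_hierarchy_levels + 1):
        def freq(entry):                      # code frequency at the current depth
            return sum(block_codes[c] for c in block_codes
                       if any(c.startswith(code[:level]) for code in entry["codes"]))
        best = max(candidates, key=freq)
        tied = [e for e in candidates if freq(e) == freq(best)]
        if len(tied) == 1:
            return best
        candidates = tied                     # refine at the next, finer level
    return candidates[0]

# Toy entries with IPC-like topic codes (purely illustrative).
entries = {
    "bank": [{"sense": "river bank", "codes": ["E02B"]},
             {"sense": "money bank", "codes": ["G06Q"]}],
    "loan": [{"sense": "loan",       "codes": ["G06Q"]}],
}
print(disambiguate("bank", entries, "the bank approved the loan")["sense"])
```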





Apparatus and method for encoding and decoding an audio signal using an aligned look-ahead portion

An apparatus for encoding an audio signal having a stream of audio samples has: a windower for applying a prediction coding analysis window to the stream of audio samples to obtain windowed data for a prediction analysis and for applying a transform coding analysis window to the stream of audio samples to obtain windowed data for a transform analysis, wherein the transform coding analysis window is associated with audio samples within a current frame of audio samples and with audio samples of a predefined portion of a future frame of audio samples being a transform coding look-ahead portion, wherein the prediction coding analysis window is associated with at least the portion of the audio samples of the current frame and with audio samples of a predefined portion of the future frame being a prediction coding look-ahead portion, wherein the transform coding look-ahead portion and the prediction coding look-ahead portion are identical to each other or are different from each other by less than 20%; and an encoding processor for generating prediction coded data or for generating transform coded data.





Method, system, and computer readable medium for creating clusters of text in an electronic document

Disclosed herein are systems and methods for navigating electronic texts. According to an aspect, a method may include determining text subgroups within an electronic text. The method may also include selecting a text seed within one of the text subgroups. Further, the method may include determining a similarity relationship between the text seed and one or more adjacent text subgroups that do not include the selected text seed. The method may also include associating the text seed with the one or more adjacent text subgroups based on the similarity relationship to create a text cluster.
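
A small Python sketch of the seed-growing step, assuming bag-of-words cosine similarity as the similarity relationship and paragraphs as the text subgroups (both assumptions): adjacent subgroups are attached to the seed as long as they stay similar enough.

```python
from collections import Counter
import math

def cosine_similarity(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def grow_cluster(subgroups, seed_index, threshold=0.2):
    """Attach adjacent subgroups to the seed while they stay similar enough,
    expanding outward in both directions from the seed paragraph."""
    cluster = {seed_index}
    seed = subgroups[seed_index]
    for step in (-1, +1):                      # look left, then right
        i = seed_index + step
        while 0 <= i < len(subgroups) and cosine_similarity(seed, subgroups[i]) >= threshold:
            cluster.add(i)
            i += step
    return sorted(cluster)

paragraphs = [
    "The engine uses a turbocharger for extra power.",
    "Turbocharger boost pressure is controlled by a wastegate.",
    "Boost pressure and engine power rise together.",
    "The cabin offers leather seats and ambient lighting.",
]
print(grow_cluster(paragraphs, seed_index=1))  # [0, 1, 2]
```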





Time warp contour calculator, audio signal encoder, encoded audio signal representation, methods and computer program

A time warp contour calculator for use in an audio signal decoder receives an encoded warp ratio information, derives a sequence of warp ratio values from the encoded warp ratio information, and obtains warp contour node values starting from a time warp contour start value. Ratios between the time warp contour node values and the time warp contour starting value are determined by the warp ratio values. The time warp contour calculator computes a time warp contour node value of a given time warp contour node, on the basis of a product-formation having a ratio between the time warp contour node values of the intermediate time warp contour node and the time warp contour starting value and a ratio between the time warp contour node values of the given time warp contour node and of the intermediate time warp contour node as factors.





Error concealment method and apparatus for audio signal and decoding method and apparatus for audio signal using the same

An error concealment method and apparatus for an audio signal, and a decoding method and apparatus for an audio signal using the error concealment method and apparatus. The error concealment method includes selecting one of an error concealment in a frequency domain and an error concealment in a time domain as an error concealment scheme for a current frame based on a predetermined criterion when an error occurs in the current frame, selecting one of a repetition scheme and an interpolation scheme in the frequency domain as the error concealment scheme for the current frame based on a predetermined criterion when the error concealment in the frequency domain is selected, and concealing the error of the current frame using the selected scheme.





Methods and systems for creating a shaped playlist

Methods and systems are described for generating media playlists, or selecting a media asset, according to a “shape” selected by a user. Specifically, a user may “shape” the playlist by designating specific sub-categories of media assets that should be presented at selected times in the playlist. The media application then interpolates the sub-categories for a media asset between the selected times such that adjacent media assets have smooth categorical transitions (e.g., feature incremental changes in the range of sub-categories).
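
A toy Python sketch of the interpolation idea, assuming a single numeric sub-category (tempo in BPM) and linear interpolation between the user's anchor points; real playlists would interpolate richer sub-category descriptors and avoid repeats, so this is only a shape-to-track illustration.

```python
def interpolate_shape(anchors, total_slots):
    """Fill playlist slots between user-chosen anchor points so that a numeric
    sub-category value (e.g. tempo in BPM) changes in even steps."""
    anchors = sorted(anchors.items())          # {slot_index: target_value}
    shape = {}
    for (s0, v0), (s1, v1) in zip(anchors, anchors[1:]):
        for slot in range(s0, s1 + 1):
            t = (slot - s0) / (s1 - s0)
            shape[slot] = v0 + t * (v1 - v0)
    return [shape.get(i) for i in range(total_slots)]

def pick_track(target_value, library):
    """Choose the media asset whose sub-category value is closest to the target."""
    return min(library, key=lambda track: abs(track["bpm"] - target_value))

# User "shapes" a 7-slot playlist: calm start, energetic middle, calm end.
targets = interpolate_shape({0: 80, 3: 160, 6: 90}, total_slots=7)
library = [{"title": "A", "bpm": 82}, {"title": "B", "bpm": 120},
           {"title": "C", "bpm": 158}, {"title": "D", "bpm": 95}]
print([pick_track(t, library)["title"] for t in targets])
```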





Voice commands for online social networking systems

In one embodiment, a method includes accessing a social graph that includes a plurality of nodes and edges, receiving from a first user a voice message comprising one or more commands, receiving location information associated with the first user, identifying edges and nodes in the social graph based on the location information, where each of the identified edges and nodes corresponds to at least one of the commands of the voice message, and generating new nodes or edges in the social graph based on the identified nodes or identified edges.





Method for configuring displets for an interactive platform for analyzing a computer network performance

A method for configuring an interactive platform for monitoring the performance and the quality of a computer network, the monitoring data being suitable to be displayed on a dynamic page of webpage type in the form of graphic components called “displets”. The method includes providing, on the interactive platform, a configuration interface in which filtering criteria for displaying displets are defined for at least one given user, the criteria being defined in the form of parameters for configuring the rights of the at least one user.





Service distribution device and service display device

A service distribution device is provided that, when acquiring services to be used in an information terminal mounted in a vehicle or used in its passenger compartment, recognizes service availability beforehand, thereby improving usability of the services. A service distribution device includes an information correlation unit for correlating information that denotes service utilization conditions in relation to travel condition of the vehicle with the services. The service distribution device distributes to an information terminal the information that denotes the service utilization conditions correlated by the information correlation unit along with contents of the relevant service so that the information and the contents can be visibly displayed on a display unit in the information terminal.





Configurable viewcube controller

A method, apparatus, system, and computer program product provide the ability to display representative properties of a three-dimensional scene view. A 3D scene and a 3D representation of a coordinate system of the 3D scene are displayed. Different faces of the 3D representation represent and correspond to different viewpoints of the 3D scene. Different statistics for features of the 3D scene are reflected on the different faces of the 3D representation based on the viewpoint corresponding to each face. Manipulation of the 3D representation identifies and selects a different viewpoint of the 3D scene which is then reoriented accordingly.





Information processing apparatus, information processing system, information processing apparatus control method, and storage medium

An information processing apparatus according to this invention, being capable of communicating with a Web server via a network, receives from the Web server a response to a processing request issued to a Web application of the Web server. When screen control information described in a header of the response contains information which designates the priority of a screen display by a Web browser of the information processing apparatus, the information processing apparatus changes the priority of the screen display by the Web browser to the designated priority. When an event to display a screen other than a screen of the Web browser occurs while the Web browser presents a screen display corresponding to the response, the information processing apparatus inhibits an interrupt display by the event in accordance with the designated priority.





Electronic device and method for providing menu using the same

An electronic device and a method for providing a menu are disclosed. The electronic device displays a plurality of words on the screen, detects a first action specifying selection of at least one of the displayed words, searches for an option item related to the word selected by the first action, generates at least one menu item based on the selected word and the found option item, and displays the generated menu item in a first area of the screen.





Switch control in report generation

In one embodiment, a view in a graphical user interface includes a selection area that includes identifiers associated with a plurality of attributes, each of the attributes having a plurality of possible values. The area further includes one or more graphical tools to define filter criteria based at least in part on selected ones of the plurality of possible values of one or more of the attributes. The area further includes one or more switch controls each being associated with a respective one of the one or more of the attributes and indicating presentation criteria including: whether selected ones of the possible values of the respective attribute are to be shown in a report, and a dimension of the report in which to space the selected ones of the possible values from one another if the selected ones of the possible values are to be shown in the report.





Mirrored file manager

A file managing software program for managing a list of elements in a specific sequence in a first file of a computer program, including the steps of copying the first file to form a second file having an identical list of elements as the first file. The user is then permitted to rearrange the sequence of the elements of the second file independently of the sequence of the first file. A display of both the first and the second file list elements is provided to the user. Further embodiments allow the user to categorize, prioritize, and order the second file element list according to user-specified rules for how it is organized and displayed, to provide a more convenient and flexible presentation of the file contents.





System and method for simultaneous display of multiple information sources

A computerized method of presenting information from a variety of sources on a display device. Specifically the present invention describes a graphical user interface for organizing the simultaneous display of information from a multitude of information sources. In particular, the present invention comprises a graphical user interface which organizes content from a variety of information sources into a grid of tiles, each of which can refresh its content independently of the others. The grid functionality manages the refresh rates of the multiple information sources. The present invention is intended to operate in a platform independent manner.





Information processing apparatus for displaying screen information acquired from an outside device in a designated color

An information processing apparatus configured to display a user interface on a display unit according to screen information acquired from an outside device changes the screen information according to a display attribute set by a user. If the setting of a display attribute of an object included in the screen information is unchangeable, color conversion processing of the specified object included in the screen information is performed, and the screen information obtained by executing conversion processing, according to the display attribute set by the user, on the screen information including the object which has undergone the color conversion processing is displayed.





Adaptive user interface for widescreen devices

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for adapting user interfaces for devices that include widescreen displays. In one aspect, a method includes determining a size characteristic of a display of a mobile device, determining a size characteristic of content that is to be displayed on the display, and comparing the size characteristic of the content to the size characteristic of the display. The method also includes selecting one or more controls to display in a portion of the display that is not to be used to display the content based on comparing the size characteristic of the content to the size characteristic of the display, displaying the content, and displaying the selected controls in a portion of the display that is not used to display the content.





Alert event notification

Alert event notifications may be provided by: displaying a first user interface layer including at least one user interface element configured to provide an alert event notification; displaying a second user interface layer such that at least a portion of the second user interface layer overlays the at least one user interface element configured to provide an alert event notification; detecting an alert event; and at least partially displaying the at least one user interface element configured to provide an alert event notification in an area where the at least a portion of the second user interface layer overlays the at least one user interface element configured to provide an alert event notification.





Methods and apparatus to create process control graphics based on process control information

Methods and apparatus to automatically link process control graphics to process control algorithm information are described. An example method involves displaying a first process control image including process control algorithm information and displaying adjacent to the first process control image a second process control image to include process control graphics. The method automatically links at least some of the process control algorithm information to a graphic in the second process control image in response to user inputs associated with the first and second process control images.





System and method for applying a text prediction algorithm to a virtual keyboard

An electronic device for text prediction in a virtual keyboard. The device includes a memory including an input determination module for execution by a microprocessor, the input determination module being configured to: receive signals representing input at the virtual keyboard, the virtual keyboard being divided into a plurality of subregions, the plurality of subregions including at least one subregion associated with two or more characters and/or symbols of the virtual keyboard; identify a subregion on the virtual keyboard corresponding to the input; determine any character or symbol associated with the identified subregion; and, if there is at least one determined character or symbol, provide the at least one determined character or symbol to a text prediction algorithm.
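
A hedged Python sketch of the subregion lookup feeding a (toy) prediction step; the coordinate layout, region sizes, and dictionary filter are assumptions, and the real text prediction algorithm is left abstract.

```python
from dataclasses import dataclass

@dataclass
class Subregion:
    x0: float
    y0: float
    x1: float
    y1: float
    characters: str      # one subregion may cover several neighbouring keys

    def contains(self, x, y):
        return self.x0 <= x < self.x1 and self.y0 <= y < self.y1

def characters_for_touch(x, y, subregions):
    """Map a touch coordinate to its subregion and return every character
    associated with that subregion (empty string if none matches)."""
    for region in subregions:
        if region.contains(x, y):
            return region.characters
    return ""

def predict(prefix, ambiguous_chars, dictionary):
    """Feed all characters of the touched subregion to a toy prediction step:
    keep dictionary words that extend the prefix with any of those characters."""
    return [w for w in dictionary
            if w.startswith(prefix) and len(w) > len(prefix)
            and w[len(prefix)] in ambiguous_chars]

regions = [Subregion(0, 0, 40, 40, "qw"), Subregion(40, 0, 80, 40, "er")]
chars = characters_for_touch(40, 10, regions)        # falls into the "er" subregion
print(predict("th", chars, ["the", "them", "thw", "thrill", "quick"]))
```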





System and method for managing and displaying securities market information

A message screen display comprises a static non-scrollable display area for display of at least part of a first message, the first message having an associated first message time. The message screen display further comprises a scrollable display area for display of at least part of a second message, the second message having an associated second message time. The message screen display further comprises a feature applied to at least part of the first message that varies based on time as referenced to the associated first message time.





Post selection mouse pointer location

A technique is provided for post selection location of a mouse pointer icon in a display screen of a computing device. A software tool receives input of the post selection location for the mouse pointer icon. The post selection location defines a default location to move the mouse pointer icon in response to a window action taken on a window displayed in the display screen. In response to the window action in which the mouse pointer icon is initially displayed at a selection location corresponding to the window action, the mouse pointer icon is moved to the post selection location such that the mouse pointer icon is displayed at the post selection location in the display screen.





Vehicular manipulation apparatus

A remote manipulation apparatus includes a main body and a manipulating handle manipulated by a user to move to cover all the orientations from a manipulation basis position defined on a basis of the main body. Movement of the manipulating handle relative to the manipulation basis position corresponds to movement of a pointer image relative to a screen basis position on a screen of a display apparatus. An auxiliary navigational display window includes a specified button image assigned with pointer-pulling information. When the auxiliary navigational display window appears on the screen, the manipulating handle is automatically driven to a position that corresponds to a position of the specified button image on the screen so that the pointer image is moved onto the specified button image that is assigned with the pointer-pulling information.





User interfaces for displaying relationships between cells in a grid

User interfaces for displaying relationships between cells in a grid. In one example embodiment, a user interface includes a grid including rows and columns and a plurality of cells each having a specific position in the grid. A first one of the cells is related to a second one of the cells. The grid is configured to display, upon selection of the first cell or second cell, a visual representation of the relationship between the first cell and the second cell.





Representation of overlapping visual entities

Various embodiments present a combined visual entity that represents overlapping visual entities. The combined visual entity can include a primary visualization that represents one of the overlapping visual entities and annotations that represent others of the overlapping visual entities. For example, a map view can include multiple geographical entities that overlap. A primary visualization can be rendered that represents one of the multiple geographical entities. The primary visualization can be visually annotated (e.g., with symbols, letters, or other visual indicators) to indicate others of the multiple geographical entities. In some embodiments, a zoom operation can cause visual entities to be added and/or removed from the combined visual entity.