
Sound localization for user in motion

Methods, apparatus, and computer programs for simulating the source of a sound are provided. One method includes operations for determining a location in space of the head of a user utilizing face recognition of images of the user. Further, the method includes an operation for determining a sound to be delivered to two speakers, each speaker being associated with one ear of the user, and an operation for determining an emanating location in space for the sound. The acoustic signals for each speaker are established based on the location in space of the head, the sound, the emanating location in space, and the auditory characteristics of the user. In addition, the acoustic signals are transmitted to the two speakers. When played by the two speakers, the acoustic signals simulate that the sound originated at the emanating location in space.
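
The abstract leaves the signal processing unspecified, so here is a minimal sketch, assuming a simple spherical-head model, of how per-ear signals could be derived from the head location, head orientation, and the emanating location. The Woodworth-style time difference, the ±6 dB level difference, and all helper names are illustrative assumptions; a real system would use the listener's measured auditory characteristics (HRTFs).

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, generic head radius (assumption)

def binaural_pair(mono, fs, head_pos, head_yaw, source_pos):
    """Return (left, right) signals that crudely simulate `mono` emanating
    from `source_pos` for a head at `head_pos` facing `head_yaw` (radians).
    Uses a spherical-head time difference and a simple level difference;
    a production system would apply the user's HRTFs instead."""
    dx, dy = source_pos[0] - head_pos[0], source_pos[1] - head_pos[1]
    azimuth = np.arctan2(dy, dx) - head_yaw        # source angle w.r.t. the nose
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth + np.sin(azimuth))  # seconds
    ild_db = 6.0 * np.sin(azimuth)                 # assumed +/- 6 dB maximum
    delay = int(round(abs(itd) * fs))              # far ear receives sound later
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    g_near = 10 ** (abs(ild_db) / 40.0)            # near ear slightly louder
    g_far = 10 ** (-abs(ild_db) / 40.0)
    if azimuth >= 0:                               # source to the user's left
        return g_near * near, g_far * far
    return g_far * far, g_near * near

fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s test tone
left, right = binaural_pair(tone, fs, head_pos=(0.0, 0.0), head_yaw=0.0,
                            source_pos=(1.0, 1.0))    # 45 degrees to the left
```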





Automated communication integrator

An apparatus includes a plurality of applications and an integrator having a voice recognition module configured to identify at least one voice command from a user. The integrator is configured to integrate information from a remote source into at least one of the plurality of applications based on the identified voice command. A method includes analyzing speech from a first user of a first mobile device having a plurality of applications, identifying a voice command based on the analyzed speech using a voice recognition module, and incorporating information from a remote source into at least one of the plurality of applications based on the identified voice command.





Script compliance and quality assurance based on speech recognition and duration of interaction

Apparatus and methods are provided for using automatic speech recognition to analyze a voice interaction and verify compliance of an agent reading a script to a client during the voice interaction. In one aspect of the invention, a communications system includes a user interface, a communications network, and a call center having an automatic speech recognition component. In other aspects of the invention, a script compliance method includes the steps of conducting a voice interaction between an agent and a client and evaluating the voice interaction with an automatic speech recognition component adapted to analyze the voice interaction and determine whether the agent has adequately followed the script. In still further aspects of the invention, the duration of a given interaction can be analyzed, either apart from or in combination with the script compliance analysis above, to seek to identify instances of agent non-compliance, of fraud, or of quality-analysis issues.
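
As a rough illustration of how a transcript and a call duration might be checked together, the sketch below compares an ASR transcript against a required script with difflib and flags calls whose length deviates strongly from an expected duration. The script text, the 0.8 similarity threshold, and the duration tolerance are illustrative assumptions, not values from the patent.

```python
import difflib

REQUIRED_SCRIPT = ("this call may be recorded for quality assurance "
                   "do you consent to the terms i have just described")

def script_compliance(transcript, script=REQUIRED_SCRIPT, min_ratio=0.8):
    """Similarity between the agent transcript and the required script,
    plus a pass/fail decision against an illustrative threshold."""
    ratio = difflib.SequenceMatcher(None, script.split(),
                                    transcript.lower().split()).ratio()
    return ratio, ratio >= min_ratio

def duration_suspicious(duration_s, expected_s=180.0, tolerance=0.5):
    """Flag interactions whose length deviates far from the expected duration,
    which may point to skipped scripts, fraud, or quality issues."""
    return abs(duration_s - expected_s) > tolerance * expected_s

transcript = ("hi this call may be recorded for quality assurance purposes "
              "do you consent to the terms i have just described")
ratio, compliant = script_compliance(transcript)
print(f"script match {ratio:.2f}, compliant={compliant}, "
      f"duration suspicious={duration_suspicious(45.0)}")
```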





Method and system for facilitating communications for a user transaction

Current human-to-machine interfaces enable users to interact with a company's database and enter into a series of transactions (e.g., purchasing products/services and paying bills). Each transaction may require several operations or stages requiring user input or interaction. Some systems enable a user to enter a voice input parameter providing multiple operations of instruction (e.g., single natural language command). However, users of such a system do not know what types of commands the system is capable of accepting. Embodiments of the present invention facilitate communications for user transactions by determining a user's goal transaction and presenting a visual representation of a voice input parameter for the goal transaction. The use of visual representations notifies the user of the system's capability of accepting single natural language commands and the types of commands the system is capable of accepting, thereby enabling a user to complete a transaction in a shorter period of time.





Using a physical phenomenon detector to control operation of a speech recognition engine

A device may include a physical phenomenon detector. The physical phenomenon detector may detect a physical phenomenon related to the device. In response to detecting the physical phenomenon, the device may record audio data that includes speech. The speech may be transcribed with a speech recognition engine. The speech recognition engine may be included in the device, or may be included with a remote computing device with which the device may communicate.





Method for classifying audio signal into fast signal or slow signal

Low bit rate audio coding, such as a bandwidth extension (BWE) algorithm, often encounters the conflicting goals of achieving high time resolution and high frequency resolution at the same time. To achieve the best possible quality, the input signal can first be classified into a fast signal or a slow signal. This invention focuses on classifying the signal into a fast signal or a slow signal based on at least one, or a combination, of the following parameters: spectral sharpness, temporal sharpness, pitch correlation (pitch gain), and spectral envelope variation. This classification information can help to choose different BWE algorithms, different coding algorithms, and different postprocessing algorithms for fast signals and slow signals, respectively.
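
A minimal sketch of such a classifier is shown below, assuming simple definitions for three of the listed parameters (spectral sharpness as a spectral peak-to-mean ratio, temporal sharpness as a sub-frame energy peak-to-mean ratio, and pitch correlation as the maximum normalized autocorrelation); spectral envelope variation is omitted because it needs frame history. The thresholds and the voting rule are illustrative assumptions, not the claimed algorithm.

```python
import numpy as np

def spectral_sharpness(frame):
    """Peak-to-mean ratio of the magnitude spectrum (one possible definition)."""
    mag = np.abs(np.fft.rfft(frame))
    return mag.max() / (mag.mean() + 1e-12)

def temporal_sharpness(frame, n_sub=8):
    """Peak-to-mean ratio of sub-frame energies: high for transients."""
    energies = np.array([np.sum(s ** 2) for s in np.array_split(frame, n_sub)])
    return energies.max() / (energies.mean() + 1e-12)

def pitch_gain(frame, lag_min=32, lag_max=400):
    """Maximum normalized autocorrelation over plausible pitch lags."""
    best = 0.0
    for lag in range(lag_min, min(lag_max, len(frame) - 1)):
        a, b = frame[lag:], frame[:-lag]
        denom = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)) + 1e-12
        best = max(best, float(np.dot(a, b)) / denom)
    return best

def classify_fast_slow(frame):
    """Vote over three of the listed parameters; thresholds are illustrative."""
    votes = 0
    votes += temporal_sharpness(frame) > 4.0    # strong transient energy
    votes += spectral_sharpness(frame) < 5.0    # flat, noise-like spectrum
    votes += pitch_gain(frame) < 0.4            # weak periodicity
    return "fast" if votes >= 2 else "slow"

fs = 16000
t = np.arange(fs // 50) / fs                    # one 20 ms frame
steady = np.sin(2 * np.pi * 220 * t)            # periodic tone
click = np.zeros_like(t)
click[10] = 1.0                                 # isolated transient
print(classify_fast_slow(steady), classify_fast_slow(click))   # slow fast
```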





Apparatus for processing an audio signal and method thereof

An apparatus for processing an audio signal and a method thereof are disclosed. The present invention includes receiving a downmix signal and side information; extracting control restriction information from the side information; receiving control information for controlling gain or panning of at least one object signal; generating at least one of first multi-channel information and first downmix processing information based on the control information and object information, without using the control restriction information; and generating an output signal by applying the at least one of the first multi-channel information and the first downmix processing information to the downmix signal, wherein the control restriction information relates to a parameter indicating a limiting degree of the control information.





Sparse audio

A method comprising: sampling received audio at a first rate to produce a first audio signal; transforming the first audio signal into a sparse domain to produce a sparse audio signal; re-sampling the sparse audio signal to produce a re-sampled sparse audio signal; and providing the re-sampled sparse audio signal, wherein bandwidth required for accurate audio reproduction is removed but bandwidth required for spatial audio encoding is retained; and/or a method comprising: receiving a first sparse audio signal for a first channel; receiving a second sparse audio signal for a second channel; and processing the first sparse audio signal and the second sparse audio signal to produce one or more inter-channel spatial audio parameters.
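
The second method above reduces to estimating inter-channel cues from two sparse-domain signals. The sketch below keeps only the largest DFT coefficients as an illustrative sparse representation (the claimed transform is not specified) and estimates an inter-channel level difference and time difference from the two sparse spectra; the function names and constants are assumptions.

```python
import numpy as np

def sparsify(x, keep=16):
    """Keep only the `keep` largest-magnitude DFT coefficients, an
    illustrative stand-in for the sparse-domain representation."""
    X = np.fft.rfft(x)
    X[np.argsort(np.abs(X))[:-keep]] = 0.0
    return X

def interchannel_parameters(X_left, X_right, n, fs):
    """Estimate inter-channel level difference (dB) and time difference (ms)
    from two sparse spectra of frame length `n`."""
    e_l = np.sum(np.abs(X_left) ** 2)
    e_r = np.sum(np.abs(X_right) ** 2)
    icld_db = 10 * np.log10((e_l + 1e-12) / (e_r + 1e-12))
    # Cross-correlation via the cross-spectrum; the peak lag is the delay
    # of the right channel relative to the left channel.
    cross = np.fft.irfft(X_right * np.conj(X_left), n)
    lag = int(np.argmax(cross))
    if lag > n // 2:
        lag -= n
    return icld_db, 1000.0 * lag / fs

fs, n = 16000, 512
t = np.arange(n) / fs
left = np.sin(2 * np.pi * 500 * t)
right = 0.5 * np.roll(left, 8)                 # quieter and 8 samples later
icld, ictd = interchannel_parameters(sparsify(left), sparsify(right), n, fs)
print(f"ICLD {icld:.1f} dB, ICTD {ictd:.2f} ms")   # ~6 dB, ~0.5 ms
```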





Audio controlling apparatus, audio correction apparatus, and audio correction method

According to one embodiment, an audio controlling apparatus includes a first receiver configured to receive an audio signal, a second receiver configured to receive environmental sound, a temporary gain calculator configured to calculate a temporary gain based on the environmental sound received by the second receiver, a sound type determination module configured to determine the sound type of a main component of the audio signal received by the first receiver, and a gain controller configured to stabilize the temporary gain calculated by the temporary gain calculator and set a gain when it is determined that the sound type of the main component of the audio signal received by the first receiver is music.





Extracting information from unstructured text using generalized extraction patterns

Methods, systems, and apparatus, including computer program products, for extracting information from unstructured text. Fact pairs are used to extract basic patterns from a body of text. Patterns are generalized by replacing words with classes of similar words. Generalized patterns are used to extract further fact pairs from the body of text. The process can begin with fact pairs, basic patterns, or generalized patterns.
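
A toy version of the bootstrap loop might look like the following: seed fact pairs yield the literal text found between the facts (basic patterns), literal phrases are replaced by a word-class placeholder (generalized patterns), and the generalized patterns are matched against the corpus to extract further pairs. The corpus, the word class, and the regular expressions are invented for illustration.

```python
import re

CORPUS = [
    "Mozart was born in Salzburg in 1756.",
    "Einstein was born in Ulm.",
    "Picasso came into the world in Malaga.",
]

# Toy class of interchangeable phrases; a real system would learn such
# classes from distributional similarity.
WORD_CLASSES = {"BORN_PHRASE": ["was born in", "came into the world in"]}

def basic_patterns(seed_pairs, corpus):
    """Collect the literal text found between a known (person, city) pair."""
    patterns = set()
    for person, city in seed_pairs:
        for sent in corpus:
            m = re.search(re.escape(person) + r"\s+(.+?)\s+" + re.escape(city), sent)
            if m:
                patterns.add(m.group(1))
    return patterns

def generalize(patterns):
    """Replace literal phrases with class names to widen coverage."""
    out = set()
    for p in patterns:
        for cls, phrases in WORD_CLASSES.items():
            for phrase in phrases:
                if phrase in p:
                    out.add(p.replace(phrase, cls))
    return out or patterns

def extract(generalized_patterns, corpus):
    """Apply generalized patterns to pull further (person, city) pairs."""
    pairs = set()
    for g in generalized_patterns:
        for cls, phrases in WORD_CLASSES.items():
            for phrase in phrases:
                regex = r"(\w+)\s+" + re.escape(g.replace(cls, phrase)) + r"\s+(\w+)"
                for sent in corpus:
                    pairs.update(re.findall(regex, sent))
    return pairs

seeds = [("Mozart", "Salzburg")]
patterns = basic_patterns(seeds, CORPUS)     # {'was born in'}
print(extract(generalize(patterns), CORPUS))
# {('Mozart', 'Salzburg'), ('Einstein', 'Ulm'), ('Picasso', 'Malaga')}
```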





Text suggestion

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for automatic text suggestion are described. One of the methods includes receiving a text item including one or more terms; determining a plurality of text strings, each text string including a matching portion and one or more suffixes, wherein the matching portion matches the text item, and the one or more suffixes are located after the matching portion; ranking the one or more suffixes based on a credibility score and a frequency score of each suffix, the credibility score indicating an estimated credibility of a source of the text string including the suffix, the frequency score indicating an estimated frequency of appearance of the suffix; and providing a group of the one or more suffixes that includes a highest ranking suffix for display as a suggestion for completing a sentence starting from the text item.
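
A small sketch of the ranking step is given below, assuming each candidate text string is tagged with its source and that the credibility and frequency scores are blended linearly; the source-credibility table, the 0.5 weighting, and the candidate strings are illustrative assumptions.

```python
from collections import Counter

# Candidate completions gathered from different sources; credibility values
# per source are illustrative.
SOURCE_CREDIBILITY = {"style_guide": 0.9, "web_forum": 0.4}

CANDIDATES = [
    ("the quick brown fox jumps over the lazy dog", "style_guide"),
    ("the quick brown fox jumps over the fence", "web_forum"),
    ("the quick brown fox jumps over the fence", "web_forum"),
]

def suggest(text_item, candidates, weight_credibility=0.5, top_k=2):
    """Rank suffixes of strings whose prefix matches `text_item` by a blend
    of source credibility and frequency of appearance."""
    matches = [(s[len(text_item):].strip(), src)
               for s, src in candidates if s.startswith(text_item)]
    freq = Counter(suffix for suffix, _ in matches)
    total = sum(freq.values()) or 1
    scores = {}
    for suffix, src in matches:
        credibility = SOURCE_CREDIBILITY.get(src, 0.1)
        frequency = freq[suffix] / total
        score = weight_credibility * credibility + (1 - weight_credibility) * frequency
        scores[suffix] = max(scores.get(suffix, 0.0), score)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(suggest("the quick brown fox jumps over", CANDIDATES))
# ['the lazy dog', 'the fence']
```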





Manner of pronunciation-influenced search results

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating search results. In one aspect, a method includes obtaining a transcription of a voice query, and data that identifies an accent of the voice query, submitting the transcription and the data that identifies the accent of the voice query to a search engine to generate one or more accent-influenced results of the voice query, and providing the accent-influenced results to a client device for display.





Method and apparatus for processing audio frames to transition between different codecs

A method (700, 800) and apparatus (100, 200) processes audio frames to transition between different codecs. The method can include producing (720), using a first coding method, a first frame of coded output audio samples by coding a first audio frame in a sequence of frames. The method can include forming (730) an overlap-add portion of the first frame using the first coding method. The method can include generating (740) a combination first frame of coded audio samples based on combining the first frame of coded output audio samples with the overlap-add portion of the first frame. The method can include initializing (760) a state of a second coding method based on the combination first frame of coded audio samples. The method can include constructing (770) an output signal based on the initialized state of the second coding method.





Audio encoder, audio decoder, methods for encoding and decoding an audio signal, and a computer program

An encoder for providing an audio stream on the basis of a transform-domain representation of an input audio signal includes a quantization error calculator configured to determine a multi-band quantization error over a plurality of frequency bands of the input audio signal for which separate band gain information is available. The encoder also includes an audio stream provider for providing the audio stream such that the audio stream includes information describing an audio content of the frequency bands and information describing the multi-band quantization error. A decoder for providing a decoded representation of an audio signal on the basis of an encoded audio stream representing spectral components of frequency bands of the audio signal includes a noise filler for introducing noise into spectral components of a plurality of frequency bands to which separate frequency band gain information is associated on the basis of a common multi-band noise intensity value.





Thought recollection and speech assistance device

Some embodiments of the inventive subject matter include a method for detecting speech loss and supplying appropriate recollection data to the user. Such embodiments include detecting a speech stream from a user, converting the speech stream to text, storing the text, detecting an interruption to the speech stream, wherein the interruption to the speech stream indicates speech loss by the user, searching a catalog using the text as a search parameter to find relevant catalog data, and presenting the relevant catalog data to remind the user about the speech stream.





Speaker recognition from telephone calls

The present invention relates to a method for speaker recognition, comprising the steps of obtaining and storing speaker information for at least one target speaker; obtaining a plurality of speech samples from a plurality of telephone calls from at least one unknown speaker; classifying the speech samples according to the at least one unknown speaker thereby providing speaker-dependent classes of speech samples; extracting speaker information for the speech samples of each of the speaker-dependent classes of speech samples; combining the extracted speaker information for each of the speaker-dependent classes of speech samples; comparing the combined extracted speaker information for each of the speaker-dependent classes of speech samples with the stored speaker information for the at least one target speaker to obtain at least one comparison result; and determining whether one of the at least one unknown speakers is identical with the at least one target speaker based on the at least one comparison result.
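
The abstract does not say how speaker information is represented; the sketch below assumes each speech sample is reduced to a fixed-length embedding, combines the samples of each unknown-speaker class by averaging, and compares the combined embedding with the stored target speaker using cosine similarity against an illustrative threshold.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def recognize(target_embedding, samples, threshold=0.8):
    """`samples` maps an unknown-speaker class to a list of per-call
    embeddings. Combine each class by averaging, then compare the combined
    embedding with the stored target speaker."""
    decisions = {}
    for speaker, embeddings in samples.items():
        combined = np.mean(np.stack(embeddings), axis=0)   # combine per class
        score = cosine(combined, target_embedding)
        decisions[speaker] = (score, score >= threshold)
    return decisions

rng = np.random.default_rng(0)
target = rng.normal(size=64)
# Unknown speaker A sounds like the target (noisy copies); B does not.
samples = {
    "unknown_A": [target + 0.3 * rng.normal(size=64) for _ in range(4)],
    "unknown_B": [rng.normal(size=64) for _ in range(4)],
}
for speaker, (score, match) in recognize(target, samples).items():
    print(speaker, f"score={score:.2f}", "MATCH" if match else "no match")
```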





System, method and program product for providing automatic speech recognition (ASR) in a shared resource environment

A speech recognition system, a method of recognizing speech, and a computer program product therefor. A client device identified with a context for an associated user selectively streams audio to a provider computer, e.g., a cloud computer. Speech recognition receives the streaming audio, maps utterances to specific textual candidates, and determines a likelihood of a correct match for each mapped textual candidate. A context model selectively winnows candidates to resolve recognition ambiguity according to context whenever multiple textual candidates are recognized as potential matches for the same mapped utterance. Matches are used to update the context model, which may be used for multiple users in the same context.





Language model creation device

This device 301 stores a first content-specific language model representing a probability that a specific word appears in a word sequence representing a first content, and a second content-specific language model representing a probability that the specific word appears in a word sequence representing a second content. The device creates a language model based on a first probability parameter representing a probability that a content represented by a target word sequence, included in a speech recognition hypothesis generated by a speech recognition process of recognizing a word sequence corresponding to a speech, is the first content; a second probability parameter representing a probability that the content represented by the target word sequence is the second content; the first content-specific language model; and the second content-specific language model. The created language model represents a probability that the specific word appears in a word sequence corresponding to the part of the speech that corresponds to the target word sequence.
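
One way to read this is as a weighted interpolation of the two content-specific language models, with the two probability parameters as weights. The sketch below does exactly that for toy unigram models; the vocabularies, probabilities, and 0.7/0.3 weights are invented for illustration, and the real models need not be unigrams.

```python
# Content-specific unigram models (toy numbers): probability that a word
# appears in word sequences about each content.
SPORTS_LM = {"goal": 0.08, "score": 0.05, "meeting": 0.001}
BUSINESS_LM = {"goal": 0.01, "score": 0.002, "meeting": 0.06}

def create_language_model(p_first, p_second, first_lm, second_lm):
    """Blend two content-specific language models with weights equal to the
    probabilities that the recognized segment expresses each content."""
    vocab = set(first_lm) | set(second_lm)
    norm = p_first + p_second
    return {w: (p_first * first_lm.get(w, 0.0) +
                p_second * second_lm.get(w, 0.0)) / norm
            for w in vocab}

# Suppose the recognizer believes the current segment is about sports with
# probability 0.7 and about business with probability 0.3 (illustrative).
lm = create_language_model(0.7, 0.3, SPORTS_LM, BUSINESS_LM)
print({w: round(p, 4) for w, p in sorted(lm.items())})
```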





Biometric voice command and control switching device and method of use

A biometric voice command and control switching device has a microphone assembly for receiving a currently spoken challenge utterance and a reference utterance, and a voice processing circuit for creating electronic signals indicative thereof. The device further includes a memory for storing the electronic signals, and a processor for comparing the electronic signals to determine if there is a match. If there is a match, an interface circuit enables the operable control of the controlled device.





Low power activation of a voice activated device

In a mobile device, a bone conduction or vibration sensor is used to detect the user's speech, and the resulting output is used as the source for a low power Voice Trigger (VT) circuit that can activate the Automatic Speech Recognition (ASR) of the host device. This invention is applicable to mobile devices such as wearable computers with head mounted displays, mobile phones, and wireless headsets and headphones that use speech recognition for entering input commands and control. The speech sensor can be a bone conduction microphone used to detect sound vibrations in the skull, or a vibration sensor used to detect sound pressure vibrations from the user's speech. This VT circuit can be independent of any audio components of the host device and can therefore be designed to consume ultra-low power. Hence, this VT circuit can be active when the host device is in a sleeping state and can be used to wake the host device on detection of speech from the user. This VT circuit will be resistant to outside noise and react solely to the user's voice.





Messaging response system providing translation and conversion of written language into a different spoken language

A messaging response system is disclosed wherein a service providing system provides services to users via messaging communications. In accordance with an exemplary embodiment of the present invention, multiple respondents servicing users through messaging communications may appear to simultaneously use a common “screen name” identifier.





Speech recognition and synthesis utilizing context dependent acoustic models containing decision trees

A speech recognition method including the steps of receiving, from a known speaker, a speech input comprising a sequence of observations and determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model. The acoustic model has a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation, and has been trained using first training data and adapted to said speaker using second training data. The speech recognition method also determines the likelihood of a sequence of observations occurring in a given language using a language model, combines the likelihoods determined by the acoustic model and the language model, and outputs a sequence of words identified from said speech input signal. The acoustic model is context based for the speaker, the context-based information being contained in the model using a plurality of decision trees, and the structure of the decision trees is based on the second training data.





Systems, methods, and apparatus for gain factor attenuation

A method of signal processing according to one embodiment includes calculating an envelope of a first signal that is based on a low-frequency portion of a speech signal, calculating an envelope of a second signal that is based on a high-frequency portion of the speech signal, and calculating a plurality of gain factor values according to a time-varying relation between the envelopes of the first and second signal. The method includes attenuating, based on a variation over time of a relation between the envelopes of the first and second signals, at least one of the plurality of gain factor values. In one example, the variation over time of a relation between the envelopes is indicated by at least one distance among the plurality of gain factor values.
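
A rough sketch of the idea, under assumed definitions: the envelopes are frame-wise RMS values, each gain factor is the ratio of the high-band envelope to the low-band envelope, and a factor is attenuated when its distance to the neighbouring factor exceeds a threshold. The frame length, threshold, and attenuation amount are illustrative assumptions.

```python
import numpy as np

def frame_envelope(x, frame):
    """RMS envelope over non-overlapping frames."""
    n = len(x) // frame
    return np.sqrt(np.mean(x[:n * frame].reshape(n, frame) ** 2, axis=1) + 1e-12)

def attenuated_gain_factors(low, high, frame=160, max_jump=2.0, atten=0.5):
    """Gain factor per frame = high-band envelope / low-band envelope; a factor
    that jumps away from its predecessor by more than `max_jump` is attenuated."""
    gains = frame_envelope(high, frame) / frame_envelope(low, frame)
    for i in range(1, len(gains)):
        if abs(gains[i] - gains[i - 1]) > max_jump:   # time-varying relation
            gains[i] *= atten
    return gains

fs = 8000
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 300 * t)                 # steady low-band signal
high = 0.2 * np.sin(2 * np.pi * 3400 * t)         # quiet high-band signal
high[4000:4160] *= 20                             # sudden high-band burst
gains = attenuated_gain_factors(low, high)
print(np.round(gains[23:28], 2))                  # the burst frame is attenuated
```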





Multi-resolution switched audio encoding/decoding scheme

An audio encoder for encoding an audio signal has a first coding branch, the first coding branch comprising a first converter for converting a signal from a time domain into a frequency domain. Furthermore, the audio encoder has a second coding branch comprising a second time/frequency converter. Additionally, a signal analyzer for analyzing the audio signal is provided. The signal analyzer, on the one hand, determines whether an audio portion is effective in the encoder output signal as a first encoded signal from the first encoding branch or as a second encoded signal from the second encoding branch. On the other hand, the signal analyzer determines a time/frequency resolution to be applied by the converters when generating the encoded signals. An output interface includes, in addition to the first encoded signal and the second encoded signal, resolution information identifying the resolution used by the first time/frequency converter and by the second time/frequency converter.





Audio signal decoder, time warp contour data provider, method and computer program

An audio signal decoder has a time warp contour calculator, a time warp contour data rescaler and a warp decoder. The time warp contour calculator is configured to generate time warp contour data repeatedly restarting from a predetermined time warp contour start value, based on time warp contour evolution information describing a temporal evolution of the time warp contour. The time warp contour data rescaler is configured to rescale at least a portion of the time warp contour data such that a discontinuity at a restart is avoided, reduced or eliminated in a rescaled version of the time warp contour. The warp decoder is configured to provide the decoded audio signal representation, based on an encoded audio signal representation and using the rescaled version of the time warp contour.





Image-based character recognition

Various embodiments enable a device to perform tasks such as processing an image to recognize and locate text in the image, and providing the recognized text to an application executing on the device for performing a function (e.g., calling a number, opening an internet browser, etc.) associated with the recognized text. In at least one embodiment, processing the image includes substantially simultaneously or concurrently processing the image with at least two recognition engines, such as at least two optical character recognition (OCR) engines, running in a multithreaded mode. In at least one embodiment, the recognition engines can be tuned so that their respective processing speeds are roughly the same. Utilizing multiple recognition engines enables processing latency to be close to that of using only one recognition engine.
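
The concurrency aspect can be illustrated with a thread pool that runs two stand-in recognition engines on the same image and keeps the higher-confidence result; no real OCR library is assumed, and the engine functions below are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def engine_a(image):
    """Stand-in for a first OCR engine (e.g. a fast, coarse recognizer)."""
    return {"text": "CALL 555-0100", "confidence": 0.82}

def engine_b(image):
    """Stand-in for a second OCR engine tuned for similar latency."""
    return {"text": "CALL 555-0100", "confidence": 0.91}

def recognize_concurrently(image, engines):
    """Run all engines in parallel threads; latency is then roughly that of
    the slowest engine rather than the sum of all of them."""
    with ThreadPoolExecutor(max_workers=len(engines)) as pool:
        results = list(pool.map(lambda engine: engine(image), engines))
    # Simple combination rule: keep the highest-confidence result.
    return max(results, key=lambda r: r["confidence"])

best = recognize_concurrently(image=b"...raw image bytes...",
                              engines=[engine_a, engine_b])
print(best["text"])     # downstream code could dial the recognized number
```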





Multilingual electronic transfer dictionary containing topical codes and method of use

A multilingual electronic transfer dictionary provides for automatic topic disambiguation by including one or more topic codes in definitions contained in the dictionary. Automatic topic disambiguation is accomplished by determining the frequencies of topic codes within a block of text. Dictionary entries having more frequently occurring topic codes are preferentially selected over those having less frequently occurring topic codes. When the topic codes are members of a hierarchical topical coding system, such as the International Patent Classification system, an iterative method can be used that starts with a coarser level of the coding system and is repeated at finer levels until an ambiguity is resolved. The dictionary is advantageously used for machine translation, e.g. between Japanese and English.
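
A toy rendering of the disambiguation loop is shown below: topic codes are counted over the block of text, the sense with the more frequent code wins, and the counting is repeated from a coarse code prefix down to the full code until the tie breaks. The dictionary entries and IPC-style codes are invented for illustration.

```python
from collections import Counter

# Toy transfer-dictionary entries: word -> list of (translation, topic code).
DICTIONARY = {
    "bank": [("Bank (financial institution)", "G06Q"), ("Ufer (river bank)", "E02B")],
    "interest": [("Zins", "G06Q")],
    "loan": [("Darlehen", "G06Q")],
}

def topic_code_counts(words, level=None):
    """Count topic codes over a block of text; `level` truncates codes to a
    coarser prefix, mimicking the coarse-to-fine iteration."""
    counts = Counter()
    for w in words:
        for _, code in DICTIONARY.get(w, []):
            counts[code[:level] if level else code] += 1
    return counts

def disambiguate(word, words):
    """Prefer the sense whose topic code occurs most often in the block,
    refining from coarse code prefixes to full codes until the tie breaks."""
    senses = DICTIONARY[word]
    for level in (1, 3, None):                     # section -> class -> full code
        counts = topic_code_counts(words, level)
        scored = sorted(((counts[code[:level] if level else code], translation)
                         for translation, code in senses), reverse=True)
        if len(scored) == 1 or scored[0][0] > scored[1][0]:
            return scored[0][1]
    return senses[0][0]

text = "the bank raised the interest rate on the loan".split()
print(disambiguate("bank", text))                  # -> Bank (financial institution)
```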





Apparatus and method for encoding and decoding an audio signal using an aligned look-ahead portion

An apparatus for encoding an audio signal having a stream of audio samples has: a windower for applying a prediction coding analysis window to the stream of audio samples to obtain windowed data for a prediction analysis and for applying a transform coding analysis window to the stream of audio samples to obtain windowed data for a transform analysis, wherein the transform coding analysis window is associated with audio samples within a current frame of audio samples and with audio samples of a predefined portion of a future frame of audio samples being a transform coding look-ahead portion, wherein the prediction coding analysis window is associated with at least the portion of the audio samples of the current frame and with audio samples of a predefined portion of the future frame being a prediction coding look-ahead portion, wherein the transform coding look-ahead portion and the prediction coding look-ahead portion are identical to each other or differ from each other by less than 20%; and an encoding processor for generating prediction coded data or for generating transform coded data.





Time warp contour calculator, audio signal encoder, encoded audio signal representation, methods and computer program

A time warp contour calculator for use in an audio signal decoder receives an encoded warp ratio information, derives a sequence of warp ratio values from the encoded warp ratio information, and obtains warp contour node values starting from a time warp contour start value. Ratios between the time warp contour node values and the time warp contour starting value are determined by the warp ratio values. The time warp contour calculator computes a time warp contour node value of a given time warp contour node, on the basis of a product-formation having a ratio between the time warp contour node values of the intermediate time warp contour node and the time warp contour starting value and a ratio between the time warp contour node values of the given time warp contour node and of the intermediate time warp contour node as factors.
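
The node arithmetic can be illustrated in a few lines: each node value is the previous value times the decoded warp ratio, and the ratio of any node to the start value factors through an intermediate node as a product of two ratios (the product-formation mentioned above). The start value and the ratio values below are illustrative.

```python
START_VALUE = 1.0                       # predetermined time warp contour start
WARP_RATIOS = [1.02, 0.99, 1.05, 0.97]  # decoded warp ratio values (illustrative)

def contour_node_values(start, ratios):
    """Each node value is the previous node value times the decoded warp
    ratio, i.e. node_k = start * ratio_1 * ... * ratio_k."""
    nodes = [start]
    for r in ratios:
        nodes.append(nodes[-1] * r)
    return nodes

nodes = contour_node_values(START_VALUE, WARP_RATIOS)

# Product-formation: the ratio of a given node to the start value equals the
# product of (intermediate / start) and (given / intermediate).
intermediate, given = 2, 4
lhs = nodes[given] / nodes[0]
rhs = (nodes[intermediate] / nodes[0]) * (nodes[given] / nodes[intermediate])
print(nodes, abs(lhs - rhs) < 1e-12)
```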





Error concealment method and apparatus for audio signal and decoding method and apparatus for audio signal using the same

An error concealment method and apparatus for an audio signal, and a decoding method and apparatus for an audio signal using the error concealment method and apparatus. The error concealment method includes selecting one of an error concealment in a frequency domain and an error concealment in a time domain as an error concealment scheme for a current frame based on a predetermined criterion when an error occurs in the current frame, selecting one of a repetition scheme and an interpolation scheme in the frequency domain as the error concealment scheme for the current frame based on a predetermined criterion when the error concealment in the frequency domain is selected, and concealing the error of the current frame using the selected scheme.
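
A sketch of the selection logic, under assumed criteria: a stationarity-style measure over the last two good frames chooses between time-domain and frequency-domain concealment, and the availability of the frame after the lost one chooses between repetition and interpolation. The spectral-change measure, the threshold, and the concealment operations themselves are illustrative, not the patent's.

```python
import numpy as np

def spectral_change(prev, pprev):
    """Illustrative stationarity measure: relative change between the
    magnitude spectra of the last two good frames."""
    a, b = np.abs(np.fft.rfft(prev)), np.abs(np.fft.rfft(pprev))
    return float(np.sum(np.abs(a - b)) / (np.sum(b) + 1e-12))

def conceal(pprev, prev, nxt=None, stationarity_threshold=0.5):
    """Choose and apply a concealment scheme for a lost frame."""
    if spectral_change(prev, pprev) > stationarity_threshold:
        # Rapidly changing signal: conceal in the time domain (here simply an
        # attenuated repeat of the previous frame's waveform).
        return 0.5 * prev, "time-domain"
    prev_spec = np.fft.rfft(prev)
    if nxt is None:
        # Only past data available: repetition in the frequency domain.
        return np.fft.irfft(prev_spec, len(prev)), "frequency-domain repetition"
    # Both neighbours available: interpolate magnitudes, keep previous phase.
    mag = 0.5 * (np.abs(prev_spec) + np.abs(np.fft.rfft(nxt)))
    concealed = np.fft.irfft(mag * np.exp(1j * np.angle(prev_spec)), len(prev))
    return concealed, "frequency-domain interpolation"

fs, n = 16000, 320
t = np.arange(n) / fs
good = np.sin(2 * np.pi * 440 * t)
_, scheme = conceal(good, good, nxt=good)
print(scheme)                                   # -> frequency-domain interpolation
```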





Service distribution device and service display device

A service distribution device is provided that, when acquiring services to be used in an information terminal mounted in a vehicle or used in its passenger compartment, recognizes service availability beforehand, thereby improving usability of the services. A service distribution device includes an information correlation unit for correlating information that denotes service utilization conditions in relation to travel condition of the vehicle with the services. The service distribution device distributes to an information terminal the information that denotes the service utilization conditions correlated by the information correlation unit along with contents of the relevant service so that the information and the contents can be visibly displayed on a display unit in the information terminal.





Information processing apparatus, information processing system, information processing apparatus control method, and storage medium

An information processing apparatus according to this invention, being capable of communicating with a Web server via a network, receives from the Web server a response to a processing request issued to a Web application of the Web server. When screen control information described in a header of the response contains information that designates the priority of a screen display by a Web browser of the information processing apparatus, the information processing apparatus changes the priority of the screen display by the Web browser to the designated priority. When an event to display a screen other than a screen of the Web browser occurs while the Web browser presents a screen display corresponding to the response, the information processing apparatus inhibits an interrupt display by the event in accordance with the designated priority.





Switch control in report generation

In one embodiment, a view in a graphical user interface includes a selection area that includes identifiers associated with a plurality of attributes, each of the attributes having a plurality of possible values. The area further includes one or more graphical tools to define filter criteria based at least in part on selected ones of the plurality of possible values of one or more of the attributes. The area further includes one or more switch controls each being associated with a respective one of the one or more of the attributes and indicating presentation criteria including: whether selected ones of the possible values of the respective attribute are to be shown in a report, and a dimension of the report in which to space the selected ones of the possible values from one another if the selected ones of the possible values are to be shown in the report.





System and method for simultaneous display of multiple information sources

A computerized method of presenting information from a variety of sources on a display device. Specifically the present invention describes a graphical user interface for organizing the simultaneous display of information from a multitude of information sources. In particular, the present invention comprises a graphical user interface which organizes content from a variety of information sources into a grid of tiles, each of which can refresh its content independently of the others. The grid functionality manages the refresh rates of the multiple information sources. The present invention is intended to operate in a platform independent manner.





Information processing apparatus for displaying screen information acquired from an outside device in a designated color

An information processing apparatus configured to display a user interface on a display unit according to screen information acquired from an outside device changes the screen information according to a display attribute set by a user. If the setting of a display attribute of an object included in the screen information is unchangeable, color conversion processing of a specified object included in the screen information is performed, and the screen information obtained by executing conversion processing according to the display attribute set by the user, with respect to the screen information including the object which has undergone the color conversion processing, is displayed.





Alert event notification

Alert event notifications may be provided by: displaying a first user interface layer including at least one user interface element configured to provide an alert event notification; displaying a second user interface layer such that at least a portion of the second user interface layer overlays the at least one user interface element configured to provide an alert event notification; detecting an alert event; and at least partially displaying the at least one user interface element configured to provide an alert event notification in an area where the at least a portion of the second user interface layer overlays the at least one user interface element configured to provide an alert event notification.





Methods and apparatus to create process control graphics based on process control information

Methods and apparatus to automatically link process control graphics to process control algorithm information are described. An example method involves displaying a first process control image including process control algorithm information and displaying adjacent to the first process control image a second process control image to include process control graphics. The method automatically links at least some of the process control algorithm information to a graphic in the second process control image in response to user inputs associated with the first and second process control images.





Multi-lane time-synched visualizations of machine data events

A visualization can include a set of swim lanes, each swim lane representing information about an event type. An event type can be specified, e.g., as those events having certain keywords and/or having specified value(s) for specified field(s). The swim lane can plot when (within a time range) events of the associated event type occurred. Specifically, each such event can be assigned to a bucket having a bucket time matching the event time. A swim lane can extend along a timeline axis in the visualization, and the buckets can be positioned at a point along the axis that represents the bucket time. Thus, the visualization may indicate whether events were clustered at a point in time. Because the visualization can include a plurality of swim lanes, the visualization can further indicate how timing of events of a first type compare to timing of events of a second type.
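
Bucketing events for such a visualization is straightforward; the sketch below groups events by type and counts them in fixed-width time buckets so that each lane can be drawn along a shared timeline. The event types, bucket width, and time range are invented for illustration.

```python
from collections import defaultdict

EVENTS = [                         # (timestamp_seconds, event_type)
    (3, "login_failure"), (4, "login_failure"), (5, "login_failure"),
    (5, "file_access"), (61, "file_access"), (62, "login_failure"),
]

def swim_lanes(events, bucket_width=60, time_range=(0, 120)):
    """Group events by type, then count them in fixed-width time buckets;
    each lane maps a bucket start time to the number of events in that bucket."""
    lanes = defaultdict(lambda: defaultdict(int))
    start, end = time_range
    for ts, etype in events:
        if start <= ts < end:
            bucket = start + ((ts - start) // bucket_width) * bucket_width
            lanes[etype][bucket] += 1
    return lanes

for etype, buckets in swim_lanes(EVENTS).items():
    print(etype, dict(buckets))
# login_failure {0: 3, 60: 1}
# file_access {0: 1, 60: 1}
```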





System and method for applying a text prediction algorithm to a virtual keyboard

An electronic device for text prediction in a virtual keyboard. The device includes a memory including an input determination module for execution by a microprocessor, the input determination module being configured to: receive signals representing input at the virtual keyboard, the virtual keyboard being divided into a plurality of subregions, the plurality of subregions including at least one subregion associated with two or more characters and/or symbols of the virtual keyboard; identify a subregion on the virtual keyboard corresponding to the input; determine any character or symbol associated with the identified subregion; and if there is at least one determined character or symbol, provide the at least one determined character or symbol to a text prediction algorithm.





System and method for managing and displaying securities market information

A message screen display comprises a static non-scrollable display area for display of at least part of a first message, the first message having an associated first message time. The message screen display further comprises a scrollable display area for display of at least part of a second message, the second message having an associated second message time. The message screen display further comprises a feature applied to at least part of the first message that varies based on time as referenced to the associated first message time.





Post selection mouse pointer location

A technique is provided for post selection location of a mouse pointer icon in a display screen of a computing device. A software tool receives input of the post selection location for the mouse pointer icon. The post selection location defines a default location to move the mouse pointer icon in response to a window action taken on a window displayed in the display screen. In response to the window action in which the mouse pointer icon is initially displayed at a selection location corresponding to the window action, the mouse pointer icon is moved to the post selection location such that the mouse pointer icon is displayed at the post selection location in the display screen.





Vehicular manipulation apparatus

A remote manipulation apparatus includes a main body and a manipulating handle manipulated by a user so as to move in all orientations from a manipulation basis position defined with respect to the main body. Movement of the manipulating handle relative to the manipulation basis position corresponds to movement of a pointer image relative to a screen basis position on a screen of a display apparatus. An auxiliary navigational display window includes a specified button image assigned with pointer-pulling information. When the auxiliary navigational display window appears on the screen, the manipulating handle is automatically driven to a position that corresponds to a position of the specified button image on the screen so that the pointer image is moved onto the specified button image that is assigned with the pointer-pulling information.





User interfaces for displaying relationships between cells in a grid

User interfaces for displaying relationships between cells in a grid. In one example embodiment, a user interface includes a grid including rows and columns and a plurality of cells each having a specific position in the grid. A first one of the cells is related to a second one of the cells. The grid is configured to display, upon selection of the first cell or second cell, a visual representation of the relationship between the first cell and the second cell.





Representation of overlapping visual entities

Various embodiments present a combined visual entity that represents overlapping visual entities. The combined visual entity can include a primary visualization that represents one of the overlapping visual entities and annotations that represent others of the overlapping visual entities. For example, a map view can include multiple geographical entities that overlap. A primary visualization can be rendered that represents one of the multiple geographical entities. The primary visualization can be visually annotated (e.g., with symbols, letters, or other visual indicators) to indicate others of the multiple geographical entities. In some embodiments, a zoom operation can cause visual entities to be added and/or removed from the combined visual entity.





User interface with enlarged icon display of key function

To improve the consumer experience with portable electronic devices, a user interface combines the use of capacitive sensors with tactile sensors in an input device. When a user places a finger, stylus, or other input instrument near a given key button, a capacitive sensor causes the display to display temporarily an indication of the function of that key in an enlarged format. The user may then press the associated key button to activate the desired function. In one exemplary embodiment, the capacitive sensor fixes the functionality to the function indicated in the display. In this embodiment, a tactile input applied to any key, whether the correct key, multiple keys, or a single incorrect key, results in activating the function indicated in the display as a result of the capacitive input.





Position editing tool of collage multi-media

In accordance with one or more embodiments of the present disclosure, methods and apparatus are provided for flexible and user-friendly position editing of loaded media in a multi-media presentation. In one embodiment, a method for editing the position of loaded media comprises loading a page of a collage document to a client device, the page having a plurality of layers with each layer being associated with a media object, and creating a list of layers of the loaded page with each layer indexed by at least a position in the collage document. The method further includes selecting a first media object, selecting a position editing tool to group the first media object and at least one other media object adjacent to the first media object; and moving the grouped first media object and the at least one other media object to a different position in the collage document. A client device for position editing loaded media is also disclosed.





Visualization techniques for imprecise statement completion

When a user enters text into an application, the application can utilize an auto-complete feature to provide the user with estimations as to a complete term a user is attempting to enter into the application. Visualization can be provided along with an estimation to disclose the likelihood the estimation is what the user intends to enter. Furthermore, a rationale can be provided to the user for the reason an estimation was provided to the user.





Apparatus and method for user input for controlling displayed information

In accordance with an example embodiment of the present invention, a method for proximity based input is provided, comprising: detecting presence of an object in close proximity to an input surface, detecting a displayed virtual layer currently associated with the object on the basis of distance of the object to the input surface, detecting a hovering input by the object, and causing a display operation to move at least a portion of the associated virtual layer in accordance with the detected hovering input.





Device, method, and graphical user interface for managing concurrently open software applications

A method includes displaying a first application view. A first input is detected, and an application view selection mode is entered for selecting one of concurrently open applications for display in a corresponding application view. An initial group of open application icons in a first predefined area and at least a portion of the first application view adjacent to the first predefined area are concurrently displayed. The initial group of open application icons corresponds to at least some of the concurrently open applications. A gesture is detected on a respective open application icon in the first predefined area, and a respective application view for a corresponding application is displayed without concurrently displaying an application view for any other application in the concurrently open applications. The open application icons in the first predefined area cease to be displayed, and the application view selection mode is exited.





Method and apparatus for aliased item selection from a list of items

The present invention introduces an aliased selection system with audible cues to allow a user of a handheld computer system to locate a desired item from a list of items. The aliased selection system allows a user to spell out a desired item by activating an input that specifies a subset containing the next letter. In one embodiment, two different subsets are used: A to M and N to Z. When the user has entered information on enough letters that the number of possibilities fits entirely on a display screen, a first audible cue is given. The user may enter additional information until a single list item is uniquely identified. Once a single item is uniquely identified, the system emits a second audible cue that informs the user that a single item has been specified. The aliased selection system allows a user to select a desired item from a list with a single hand and without looking at the display screen. However, the user may shorten the selection process by looking at the display screen.
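
A compact sketch of the selection loop is shown below, with the two audible cues simulated by print statements; the item list, the two A-M / N-Z subsets, and the assumed three-row screen capacity are illustrative.

```python
ITEMS = ["Anderson", "Baker", "Nash", "Nelson", "Newton", "Norris", "Zimmer"]
SCREEN_CAPACITY = 3                      # rows visible on the display (assumption)

def subset_for(letter):
    """Each key press only says which half of the alphabet the letter is in."""
    return "A-M" if letter.upper() <= "M" else "N-Z"

def aliased_select(target, items):
    """Filter the list one aliased letter at a time, emitting the first cue when
    the matches fit on screen and the second when one item remains."""
    candidates, fits_cue_given = list(items), False
    for pos, letter in enumerate(target):
        half = subset_for(letter)
        candidates = [it for it in candidates
                      if pos < len(it) and subset_for(it[pos]) == half]
        if not fits_cue_given and len(candidates) <= SCREEN_CAPACITY:
            print("* first cue: matches now fit on the display *")
            fits_cue_given = True
        if len(candidates) == 1:
            print("* second cue: item uniquely identified *")
            return candidates[0]
    return candidates

print(aliased_select("Newton", ITEMS))
```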