visual

Paper: Evidence for Area as the Primary Visual Cue in Pie Charts

How we read pie charts is still an open question: is it angle? Is it area? Is it arc length? In a study I'm presenting as a short paper at the IEEE VIS conference in Vancouver next week, I tried to tease the visual cues apart – using modeling and 3D pie charts. The big […]




visual

The Visual Evolution of the “Flattening the Curve” Information Graphic

Communication has been quite a challenge during the COVID-19 pandemic, and data visualization hasn't been the most helpful given the low quality of the data – see Amanda Makulec's plea to think harder about making another coronavirus chart. A great example of how to do things right is the widely-circulated Flatten the Curve information graphic/cartoon. […]




visual

Toward Improving the Evaluation of Visual Attention Models: a Crowdsourcing Approach. (arXiv:2002.04407v2 [cs.CV] UPDATED)

Human visual attention is a complex phenomenon. A computational modeling of this phenomenon must take into account where people look in order to evaluate which are the salient locations (spatial distribution of the fixations), when they look in those locations to understand the temporal development of the exploration (temporal order of the fixations), and how they move from one location to another with respect to the dynamics of the scene and the mechanics of the eyes (dynamics). State-of-the-art models focus on learning saliency maps from human data, a process that only takes into account the spatial component of the phenomenon and ignores its temporal and dynamical counterparts. In this work we focus on the evaluation methodology of models of human visual attention. We underline the limits of the current metrics for saliency prediction and scanpath similarity, and we introduce a statistical measure for the evaluation of the dynamics of the simulated eye movements. While deep learning models achieve astonishing performance in saliency prediction, our analysis shows their limitations in capturing the dynamics of the process. We find that unsupervised gravitational models, despite their simplicity, outperform all competitors. Finally, exploiting a crowd-sourcing platform, we present a study aimed at evaluating how strongly the scanpaths generated with the unsupervised gravitational models appear plausible to naive and expert human observers.




visual

A memory of motion for visual predictive control tasks. (arXiv:2001.11759v3 [cs.RO] UPDATED)

This paper addresses the problem of efficiently achieving visual predictive control tasks. To this end, a memory of motion, containing a set of trajectories built off-line, is used for leveraging precomputation and dealing with difficult visual tasks. Standard regression techniques, such as k-nearest neighbors and Gaussian process regression, are used to query the memory and provide, on-line, a warm-start and a waypoint to the control optimization process. The proposed technique allows the control scheme to achieve high performance and, at the same time, keep the computational time limited. Simulation and experimental results, carried out with a 7-axis manipulator, show the effectiveness of the approach.




visual

Semantic Signatures for Large-scale Visual Localization. (arXiv:2005.03388v1 [cs.CV])

Visual localization is a useful alternative to standard localization techniques. It works by utilizing cameras. In a typical scenario, features are extracted from captured images and compared with geo-referenced databases. Location information is then inferred from the matching results. Conventional schemes mainly use low-level visual features. These approaches offer good accuracy but suffer from scalability issues. In order to assist localization in large urban areas, this work explores a different path by utilizing high-level semantic information. It is found that object information in a street view can facilitate localization. A novel descriptor scheme called "semantic signature" is proposed to summarize this information. A semantic signature consists of type and angle information of visible objects at a spatial location. Several metrics and protocols are proposed for signature comparison and retrieval. They illustrate different trade-offs between accuracy and complexity. Extensive simulation results confirm the potential of the proposed scheme in large-scale applications. This paper is an extended version of a conference paper in CBMI'18. A more efficient retrieval protocol is presented with additional experiment results.




visual

Quda: Natural Language Queries for Visual Data Analytics. (arXiv:2005.03257v1 [cs.CL])

Visualization-oriented natural language interfaces (V-NLIs) have been explored and developed in recent years. One challenge faced by V-NLIs is in the formation of effective design decisions that usually requires a deep understanding of user queries. Learning-based approaches have shown potential in V-NLIs and reached state-of-the-art performance in various NLP tasks. However, because of the lack of sufficient training samples that cater to visual data analytics, cutting-edge techniques have rarely been employed to facilitate the development of V-NLIs. We present a new dataset, called Quda, to help V-NLIs understand free-form natural language. Our dataset contains 14,035 diverse user queries annotated with 10 low-level analytic tasks that assist in the deployment of state-of-the-art techniques for parsing complex human language. We achieve this goal by first gathering seed queries with data analysts who are target users of V-NLIs. Then we employ an extensive crowd workforce for paraphrase generation and validation. We demonstrate the usefulness of Quda in building V-NLIs by creating a prototype that makes effective design decisions for free-form user queries. We also show that Quda can be beneficial for a wide range of applications in the visualization community by analyzing the design tasks described in academic publications.




visual

DFSeer: A Visual Analytics Approach to Facilitate Model Selection for Demand Forecasting. (arXiv:2005.03244v1 [cs.HC])

Selecting an appropriate model to forecast product demand is critical to the manufacturing industry. However, due to the data complexity, market uncertainty and users' demanding requirements for the model, it is challenging for demand analysts to select a proper model. Although existing model selection methods can reduce the manual burden to some extent, they often fail to present model performance details on individual products and reveal the potential risk of the selected model. This paper presents DFSeer, an interactive visualization system to conduct reliable model selection for demand forecasting based on the products with similar historical demand. It supports model comparison and selection with different levels of details. In addition, it shows the difference in model performance on similar products to reveal the risk of model selection and increase users' confidence in choosing a forecasting model. Two case studies and interviews with domain experts demonstrate the effectiveness and usability of DFSeer.




visual

Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting. (arXiv:2005.03119v1 [cs.CL])

Unsupervised machine translation (MT) has recently achieved impressive results with monolingual corpora only. However, it is still challenging to associate source-target sentences in the latent space. As people speaking different languages biologically share similar visual systems, the potential of achieving better alignment through visual content is promising yet under-explored in unsupervised multimodal MT (MMT). In this paper, we investigate how to utilize visual content for disambiguation and promoting latent space alignment in unsupervised MMT. Our model employs multimodal back-translation and features pseudo visual pivoting in which we learn a shared multilingual visual-semantic embedding space and incorporate visually-pivoted captioning as additional weak supervision. The experimental results on the widely used Multi30K dataset show that the proposed model significantly improves over the state-of-the-art methods and generalizes well when the images are not available at the testing time.




visual

Make the most of your quarantine while stoned with these visual escapes

You shouldn't find yourself rewatching some sitcom for the thousandth time or sitting through a vacuous Hollywood blockbuster just because you're stoned and stuck inside during the age of social distancing.…




visual

Method for generating visual mapping of knowledge information from parsing of text inputs for subjects and predicates

A method for performing relational analysis of parsed input is employed to create a visual map of knowledge information. A title, header or subject line for an input item of information is parsed into syntactical components of at least a subject component and any predicate component(s) relationally linked as topic and subtopics. A search of topics and subtopics is carried out for each parsed component. If a match is found, then the parsed component is taken as a chosen topic/subtopic label. If no match is found, then the parsed component is formatted as a new entry in the knowledge map. A translation function for translating topics and subtopics from an original language into one or more target languages is enabled by user request or indicated user preference for display on a generated visual map of knowledge information.
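The parse-then-merge flow the abstract describes can be sketched in a few lines. This is a minimal illustration, not the patent's actual method: a crude whitespace split stands in for the syntactic parse, and the nested-dict map, function name, and example titles are all hypothetical.

```python
def add_to_map(knowledge_map, title):
    """Parse a title into a subject (topic) and a predicate (subtopic),
    then merge it into a dict-based knowledge map.

    A naive first-word split stands in for real syntactic parsing.
    """
    words = title.split()
    subject, predicate = words[0], " ".join(words[1:])
    # search existing topics; if none matches, a new entry is created
    subtopics = knowledge_map.setdefault(subject, [])
    if predicate and predicate not in subtopics:
        subtopics.append(predicate)
    return knowledge_map

kmap = {}
add_to_map(kmap, "Python runs everywhere")
add_to_map(kmap, "Python compiles to bytecode")
# both titles share the topic "Python", with two subtopics under it
```

A real implementation would swap the split for a part-of-speech parse and attach the translation step to the topic/subtopic labels before display.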




visual

Multi-lane time-synched visualizations of machine data events

A visualization can include a set of swim lanes, each swim lane representing information about an event type. An event type can be specified, e.g., as those events having certain keywords and/or having specified value(s) for specified field(s). The swim lane can plot when (within a time range) events of the associated event type occurred. Specifically, each such event can be assigned to a bucket having a bucket time matching the event time. A swim lane can extend along a timeline axis in the visualization, and the buckets can be positioned at a point along the axis that represents the bucket time. Thus, the visualization may indicate whether events were clustered at a point in time. Because the visualization can include a plurality of swim lanes, the visualization can further indicate how timing of events of a first type compare to timing of events of a second type.
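The core of the scheme above is assigning each event to a bucket whose time matches the event time, one lane per event type. Here is a minimal sketch under assumed inputs (the `(event_type, timestamp)` pairs and fixed-width buckets are illustrative, not the patent's data model):

```python
from collections import defaultdict

def bucketize(events, bucket_seconds=60):
    """Assign events to time buckets, grouped into one lane per event type.

    `events` is an iterable of (event_type, timestamp_seconds) pairs.
    Returns {event_type: {bucket_start_time: event_count}}.
    """
    lanes = defaultdict(lambda: defaultdict(int))
    for event_type, ts in events:
        bucket_time = ts - (ts % bucket_seconds)  # floor to the bucket boundary
        lanes[event_type][bucket_time] += 1
    return {etype: dict(buckets) for etype, buckets in lanes.items()}

events = [("error", 30), ("error", 45), ("login", 75), ("error", 130)]
lanes = bucketize(events, bucket_seconds=60)
# "error" events at t=30 and t=45 cluster in the same bucket,
# which is exactly what the swim-lane view would make visible
```

Plotting each lane's buckets along a shared timeline axis then shows whether, say, "error" events cluster just after "login" events.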




visual

Representation of overlapping visual entities

Various embodiments present a combined visual entity that represents overlapping visual entities. The combined visual entity can include a primary visualization that represents one of the overlapping visual entities and annotations that represent others of the overlapping visual entities. For example, a map view can include multiple geographical entities that overlap. A primary visualization can be rendered that represents one of the multiple geographical entities. The primary visualization can be visually annotated (e.g., with symbols, letters, or other visual indicators) to indicate others of the multiple geographical entities. In some embodiments, a zoom operation can cause visual entities to be added and/or removed from the combined visual entity.




visual

Visualization techniques for imprecise statement completion

When a user enters text into an application, the application can utilize an auto-complete feature to provide the user with estimations as to the complete term the user is attempting to enter. A visualization can be provided along with an estimation to indicate how likely it is that the estimation matches what the user intends to enter. Furthermore, a rationale can be provided to the user explaining why an estimation was offered.
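One simple way to attach a likelihood to each completion is to rank prefix matches by past usage frequency. This is a hypothetical sketch, not the patented mechanism; the frequency table and function name are invented for illustration:

```python
def suggest(prefix, frequencies):
    """Rank completions for `prefix` by estimated likelihood.

    `frequencies` maps known terms to how often they were previously
    entered (a stand-in for whatever statistics the application keeps).
    Returns (term, probability) pairs, most likely first.
    """
    matches = {t: c for t, c in frequencies.items() if t.startswith(prefix)}
    total = sum(matches.values())
    ranked = sorted(matches.items(), key=lambda kv: kv[1], reverse=True)
    return [(term, count / total) for term, count in ranked]

freq = {"visualize": 6, "visual": 3, "visor": 1}
suggestions = suggest("visu", freq)
# "visualize" ranks first; its probability doubles that of "visual"
```

The probability itself is what the visualization would encode (e.g. bar width or opacity), and the frequency counts double as the displayed rationale.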




visual

Methods, devices, and computer program products for providing instant messaging in conjunction with an audiovisual, video, or audio program

Methods, devices, and computer program products for providing instant messaging in conjunction with an audiovisual, video, or audio program are provided. The methods include providing an audiovisual, video, or audio program to a user. Viewer/listener input is received requesting activation of a program-based instant messaging function. A viewer/listener identifier corresponding to the viewer/listener is associated with a program identifier that uniquely identifies the audiovisual, video, or audio program being provided to the user to thereby generate a program viewer/listener record. The program viewer/listener record is transmitted to an electronic database. A list of other users who are viewing or listening to the program in addition to the viewer/listener is acquired from the electronic database. The list of other users is transmitted to the viewer/listener.




visual

Device and method for obfuscating visual information

A device is described for the hiding and subsequent recovery of visual information. The device comprises two or more tokens (1), each containing a mask (2,3) of coloured pixels (4), which are overlaid (5) so that, when the pixels are aligned, hidden information invisible in the individual tokens is revealed. The hidden information consists of one or more recognisable alphabetic, numerical or pictorial characters (6). During token overlay and alignment, the information becomes recognisable because it is made up of pixels whose colour is differentiated from the other pixels in the overlay. The information is hidden by adding pixels of certain colours. When the tokens are overlaid and the pixels aligned, the added pixels are effectively subtracted, revealing the hidden information. The tokens may be printed on various media, or may be displayed on an electronic device.
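The add-then-subtract idea has a classic binary analogue in visual cryptography: split a secret image into masks that each look random but recover the secret when aligned. The XOR scheme below is a simplified stand-in for the patent's colour-subtraction method, with invented names throughout:

```python
import random

def make_tokens(secret):
    """Split a binary image (list of 0/1 rows) into two random-looking
    masks whose aligned overlay (XOR) reconstructs the secret."""
    mask1 = [[random.randint(0, 1) for _ in row] for row in secret]
    mask2 = [[m ^ s for m, s in zip(r1, r2)]
             for r1, r2 in zip(mask1, secret)]
    return mask1, mask2

def overlay(mask1, mask2):
    """Aligned overlay: the 'added' random pixels cancel out."""
    return [[a ^ b for a, b in zip(r1, r2)] for r1, r2 in zip(mask1, mask2)]

secret = [[0, 1, 1], [1, 0, 1]]
m1, m2 = make_tokens(secret)
recovered = overlay(m1, m2)  # equals `secret` for any random mask1
```

Either mask alone carries no information about the secret, mirroring how an individual printed token reveals nothing until overlaid.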




visual

Code execution in complex audiovisual experiences

In one embodiment, a method includes obtaining a link to a video program; obtaining metadata that relates to the program and that defines, for a specified time point in the program, annotations to be invoked at the specified time point; wherein the annotations comprise: a graphic image; one or more filters, each of the filters comprising a key and one or more matching values; and optionally a reference to a video segment, an electronic document, program code statements, or a programmatic call; during playing the video, detecting that the video program is playing at the specified time point; in response to the detecting: for each particular annotation for the specified time point, retrieving a current value for the key, and causing to display the graphic image associated with that particular annotation only when the current value of the key matches one of the matching values of one of the filters.




visual

Intelligent board game system with visual marker based game object tracking and identification

A board game system comprises one or more game objects, a processing device, a memory device and one or more cameras. Each of the game objects comprises a unique visual marker positioned on a top surface of the game object, wherein the unique visual marker comprises a series of concentric rings that represent data that uniquely identifies the game object. As a result, during the course of game play, the location and identification of the game objects are able to be determined by the processing device by analyzing images captured by the one or more cameras of the visual markers of the game objects on the game board. The processing device is able to compare the data of the visual markers to a table stored in the memory device that associates the data with a specific game object.




visual

Designer-adaptive visual codes

A designer-adaptive visual code (32, 32′, 32″) includes a user-selected set of glyphs (36, 36′, 36″, 36‴), a user-selected set of allowable glyph orientations relative to a user-selected reference angle, and a user-selected spatial arrangement of the glyphs (36, 36′, 36″, 36‴). The user-selected set of glyphs (36, 36′, 36″, 36‴) has a size sufficient to recover geometric characteristics of at least one repeating code portion so as to generate an analyzable image when captured via a camera-equipped mobile device (26). The user-selected spatial arrangement of the glyphs (36, 36′, 36″, 36‴) includes the at least one repeating code portion (34) to be visible on a surface from at least two different areas of the surface.




visual

Methods and systems for optimizing visual data communication

A system and method for transmitting visual data by displaying a synchronization video that includes synchronization code sequences on a first device, capturing the synchronization video using a video camera of a second device, parsing and decoding the synchronization code sequences on the second device, displaying an indication of which of the synchronization code sequences are compatible for visual data transmission on the second device, receiving a selected synchronization code sequence of the synchronization code sequences on the first device, and displaying a data code sequence corresponding to the selected synchronization code sequence on the first device, wherein the data code sequence includes encoded data, and capturing and decoding the data code sequence on the second device.




visual

Portable RFID reading terminal with visual indication of scan trace

A portable radio-frequency identifier (RFID) reading terminal can comprise a microprocessor, a memory, an RFID reading device, and a display. The portable RFID reading terminal can be configured to display a scan trace provided by a line comprising a plurality of time varying points. Each point can be defined by a projection of a radio frequency (RF) signal coverage shape of the RFID reading device onto a chosen plane at a given moment in time.




visual

Method and system for quantitative assessment of visual motor response

A method and system are presented to address quantitative assessment of visual motor response in a subject, where the method comprises the steps of: (1) presenting at least one scene to a subject on a display; (2) modulating the contrast of a predetermined section of the scene; (3) moving the predetermined section relative to the scene with the movement being tracked by the subject via at least one input device; (4) measuring a kinematic parameter of the tracked movement; (5) quantitatively refining the tracked movement; (6) determining the relationship between at least one of the scene and the quantitatively refined tracked movement; (7) adjusting the modulated contrast relative to the quantitatively refined tracked movement; (8) calculating a critical threshold parameter for the subject; and (9) recording a critical threshold parameter onto a tangible computer readable medium.




visual

Apparatus and method for implementing safe visual information provision

The invention relates to an apparatus and method which allows information representing a state or condition or an action to be performed as part of a control system to be presented to one or more users. The information is selected and generated in a manner which removes or at least reduces the risk of potentially catastrophic error occurring which would be possible if, for example, the information is corrupt or lost during subsequent transmission, remote processing and/or displaying. One such use of the apparatus and method of the invention is in relation to transport vehicles and the control of the movement of said vehicles along predefined geographical paths.




visual

Visual system for programming of simultaneous and synchronous machining operations on lathes

A system and method allows visual programming of simultaneous and synchronous machining operations on multi-axis lathes. The system and method accounts for different combinations of simultaneous and synchronized lathe operations on the spindles which can utilize multiple tools. A graphic synchronization icon is assigned to each mode that preferably represents the lathe operation. Appropriate synchronous operations are grouped together in synchronization groups. The system and method are universal since a postprocessor processes the synchronization modes and synchronization groups, and translates them for use with computer programs understood by a particular CNC lathe.




visual

SYSTEMS AND METHODS FOR ANALYZING ELECTRONIC COMMUNICATIONS TO DYNAMICALLY IMPROVE EFFICIENCY AND VISUALIZATION OF COLLABORATIVE WORK ENVIRONMENTS

Systems and methods for managing a collaborative environment are provided. A plurality of sheets is stored in a collaboration system. The collaboration system tracks user interactions with the plurality of sheets and generates a collaboration graph based on the interactions. The collaboration graph is analyzed to determine similarities between the sheets and/or the users. One or more visualizations are generated based on the collaboration graph and the determined similarities. In some embodiments, the collaboration system is able to provide project management information even for dynamic workflows that are not explicitly defined.




visual

UNIVERSAL ADAPTOR FOR RAPID DEVELOPMENT OF WEB-BASED DATA VISUALIZATIONS

A method of web-based data visualization includes: a Frontend sending a request over a computer network to a server configured as a Backend; a web server of the Backend fetching data responsive to the request; the web server sending a response to the Frontend in a format compatible with a plurality of software adaptors located on the Frontend, the response including information about objects to be presented on a web component; logic of the Frontend passing the response to a selected one of the software adaptors; and the selected software adaptor rendering the response using a web visualization library associated with the selected software adaptor.




visual

AGGREGATING DATA FOR VISUALIZATION

Data items are aggregated based on information relating to a display capability of a display device, to produce aggregated data. The aggregated data is for display in a visualization presented by the display device. Responsive to user selection in the visualization, dynamically created data at a second hierarchical level different from a first hierarchical level of the aggregated data is for display in the visualization.
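The display-capability-driven aggregation can be sketched simply: use the display's point budget to pick a bucket size, then summarize each bucket. This is an illustrative reduction (the point budget, bucket means, and function name are assumptions, not the claimed method):

```python
import math

def aggregate_for_display(values, max_points):
    """Downsample `values` so at most `max_points` aggregated items are
    shown; the display capability (here, a point budget) determines the
    bucket size, and each bucket is summarized by its mean."""
    bucket = max(1, math.ceil(len(values) / max_points))
    return [sum(values[i:i + bucket]) / len(values[i:i + bucket])
            for i in range(0, len(values), bucket)]

data = list(range(100))                       # 100 raw data items
shown = aggregate_for_display(data, max_points=10)
# 10 aggregated points fit the display budget
```

The drill-down behaviour in the abstract would correspond to re-running the aggregation over one selected bucket's raw values with a fresh point budget, yielding the finer, dynamically created hierarchical level.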








visual

Multi-Layer Rendering for Visualizations

Some embodiments provide a non-transitory machine-readable medium that stores a program executable by at least one processing unit of a device. The program receives data associated with a visual presentation that includes several visual elements. The program also identifies a first set of visual elements in the several visual elements having a first type and a second set of visual elements in the several visual elements having a second type. The program further renders the first set of visual elements in a first layer of the visual presentation using a first rendering engine. The program also renders the second set of visual elements in a second layer of the visual presentation using a second rendering engine.
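The type-based split into per-engine layers can be illustrated in a few lines. The element tuples and stand-in "engines" below are hypothetical; real rendering engines would draw to canvases rather than transform strings:

```python
def render_presentation(elements, engines):
    """Group visual elements into layers by type, then render each layer
    with the engine registered for that type.

    `elements` are (type, payload) pairs; `engines` maps a type to a
    rendering function. Returns {type: [rendered payloads]}.
    """
    layers = {}
    for etype, payload in elements:
        layers.setdefault(etype, []).append(payload)
    return {etype: [engines[etype](p) for p in payloads]
            for etype, payloads in layers.items()}

engines = {"chart": str.upper, "text": str.title}  # toy stand-in engines
out = render_presentation(
    [("chart", "bars"), ("text", "axis label"), ("chart", "lines")],
    engines)
# the two "chart" elements land in one layer, the "text" element in another
```

Keeping the layers separate lets each engine redraw independently, which is the practical payoff of the two-engine design the abstract describes.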




visual

METHOD FOR GRAPHICALLY REPRESENTING A SYNTHETIC THREE-DIMENSIONAL VIEW OF THE EXTERIOR LANDSCAPE IN AN ON-BOARD VISUALISATION SYSTEM FOR AIRCRAFT

The general field of the invention is that of the graphical representation of a synthetic three dimensional view of the exterior landscape in an onboard visualisation system for aircraft, said graphical representation being displayed on a visualisation screen comprising the piloting and navigation information of said aircraft superposed onto said three-dimensional synthetic representation of the exterior landscape, said synthetic representation being computed up to a first determined distance, characterised in that said three-dimensional synthetic representation is tilted at a tilt angle about an axis positioned at the level of the terrain in a substantially horizontal plane, and substantially perpendicularly to an axis between the flight direction and the heading of the aircraft, said axis moving with the aircraft.




visual

METHOD, APPARATUS, AND COMPUTER-READABLE MEDIUM FOR VISUALIZING RELATIONSHIPS BETWEEN PAIRS OF COLUMNS

An apparatus, computer-readable medium, and computer-implemented method for visualizing relationships between pairs of columns, comprising identifying a relationship classification corresponding to two columns in a plurality of columns based on a data type of each column in the two columns, applying one or more statistical measures to data in the two columns to generate association data quantifying a plurality of relationships between data values in a first column of the two columns and data values in a second column of the two columns, wherein the one or more statistical measures are determined based at least in part on the relationship classification, and transforming the association data into a visualization, wherein the visualization comprises one or more indicators corresponding to one or more relationships in the plurality of relationships and wherein a layout of the visualization is determined based on the relationship classification.
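The dtype-driven dispatch in the abstract (classify the pair, then choose a statistic) can be sketched as follows. The classification names and the numeric/numeric-only branch are illustrative assumptions, not the claimed method; other pairings would dispatch to, e.g., group means or a chi-square statistic:

```python
def classify_pair(col_a, col_b):
    """Identify a relationship classification for two columns from their
    data types, then apply a statistical measure chosen by that
    classification. Returns (classification, statistic)."""
    def kind(col):
        return ("numeric" if all(isinstance(v, (int, float)) for v in col)
                else "categorical")

    ka, kb = kind(col_a), kind(col_b)
    if ka == kb == "numeric":
        # numeric/numeric -> Pearson correlation, computed by hand
        n = len(col_a)
        ma, mb = sum(col_a) / n, sum(col_b) / n
        cov = sum((a - ma) * (b - mb) for a, b in zip(col_a, col_b))
        sa = sum((a - ma) ** 2 for a in col_a) ** 0.5
        sb = sum((b - mb) ** 2 for b in col_b) ** 0.5
        return "numeric-numeric", cov / (sa * sb)
    return "mixed", None  # placeholder for the other classifications

cls, r = classify_pair([1, 2, 3, 4], [2, 4, 6, 8])
# perfectly linear columns -> correlation of exactly 1.0
```

The returned classification would then also select the visualization layout (scatter plot for numeric-numeric, grouped bars for mixed, and so on), matching the abstract's final transformation step.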




visual

SYSTEM AND METHOD FOR PROFILING A USER BASED ON VISUAL CONTENT

A system and method for profiling a user may generate abstract data based on features identified in visual content, the visual content stored in a computing device of a user, and may generate a profile of the user based on the abstract data and based on metadata related to the visual content. A profile may be used to perform at least one of: generating a prediction related to a behavior of the user and selecting content to be presented to the user.




visual

INTUITIVE MUSIC VISUALIZATION USING EFFICIENT STRUCTURAL SEGMENTATION

Embodiments of the present invention relate to automatically identifying structures of a music stream. A segment structure may be generated that visually indicates repeating segments of a music stream. To generate a segment structure, a feature that corresponds to a music attribute is extracted from a waveform corresponding to the music stream, such as an input signal. Utilizing a signal segmentation algorithm, such as a Variable Markov Oracle (VMO) algorithm, a symbolized signal, such as a VMO structure, is generated. From the symbolized signal, a matrix is generated. The matrix may be, for instance, a VMO-SSM. A segment structure is then generated from the matrix. The segment structure illustrates a segmentation of the music stream and the segments that are repetitive.




visual

Visual Instruction for Marching Band

AUTHOR: Rudy Ruiz | DATES: May 20, May 27, June 3 | TIME: 6 pm each day In this three-part training, Rudy Ruiz addresses the art of quality visual instruction. From fundamental principles, to teaching strategies, to finding a teaching gig, Rudy addresses all aspects of this topic over three one-hour webinars.




visual

Notch, endlessly parameterized visual tool, explained and reviewed for mere mortals

Imagine the powers of motion effects - but with the ability to control all of them, parameter by parameter, and use assets dynamically without only rendering video. From artists and VJs to big events, that's significant. CDM's Ted Pallas breaks down Notch in a review for the real world. -Ed.

The post Notch, endlessly parameterized visual tool, explained and reviewed for mere mortals appeared first on CDM Create Digital Music.




visual

Cleansing, processing, and visualizing a data set, Part 1: Working with messy data

Discover common problems associated with cleansing data for validation and processing, with solutions for dealing with them. You'll also find a custom tool to simplify the process of cleansing data and merging data sets for analysis.




visual

Cleansing, processing, and visualizing a data set, Part 2: Gaining invaluable insight from clean data sets

Learn about VQ and ART algorithms. VQ quickly and efficiently clusters a data set; ART adapts the number of clusters based on the data set.
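The VQ idea (train a small codebook so every point maps to its nearest codeword) can be shown in a tiny 1-D k-means-style loop. This is a generic sketch of vector quantization under assumed inputs, not the article's own tool:

```python
def vq(points, k, iters=20):
    """Tiny 1-D vector quantization: train a k-entry codebook by
    alternately assigning points to their nearest codeword and moving
    each codeword to the mean of its cluster."""
    codebook = points[:k]                # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - codebook[i]))
            clusters[nearest].append(p)
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(clusters)]
    return codebook

data = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
codewords = sorted(vq(data, 2))
# the two codewords settle near the two natural clusters, 1.0 and 10.0
```

ART-style algorithms differ mainly in the update rule: instead of fixing `k` up front, a new codeword is spawned whenever a point falls outside a vigilance threshold of every existing one.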




visual

Visual arts: 2011 Archibald Prize Exhibition at the Tweed River Art Gallery, Murwillumbah

ABC North Coast resident arts reviewer, Jeanti St Clair looks at the latest music and theatre to hit the region.





visual

Eric Koo's visual diary of the Gold Coast

The familiar and nostalgic, philosophical and witty candour are all alive in this documentation of Gold Coast beaches





visual

Surfing masterclass for children with visual impairment gives a sense of freedom

Visually impaired children at a surfing event in the NSW Hunter region are encouraged to get into sports and not let their disability hold them back.




visual

Watch Issa Rae Play a Gun-Wielding Stripper Opposite Danny Trejo in D Smoke’s ‘Lights On’ Visual (EXCLUSIVE)

D Smoke first wowed the music world when he won the inaugural season of "Rhythm + Flow," Netflix's hip-hop  competition show with Cardi B, Chance the Rapper and T.I. as judges. Impressing both the panel and viewers at home with meaningful, uplifting verses, the Inglewood, Calif. native, whose real name is Daniel Farris, dropped his […]





visual

Making small visual displays accessible to people with vision loss. AFB to develop consumer report on small screen access.

The ability to read small visual displays (SVDs) affects successful functioning at home and in the workplace. SVDs can be found in products as diverse as cell phones, personal digital assistants, photocopiers, fax machines, kitchen and laundry appliances, home entertainment devices, exercise equipment, and diabetes self-management technology. Individuals with vision loss face severe limitations in using such products safely and effectively because the visual displays lack accessibility features.




visual

Floating Point Visually Explained




visual

NEWS: The Starfighter Visual Novel Kickstarter!

This is the big surprise project I've been working on! The Starfighter visual novel is going to be awesome-- but I need your help to make it happen!

Check out the Kickstarter page here! There's lots of rewards, new art, and a description of the important stuff: how the game will work, what it'll be about, MY HOPES AND DREAMS, etc! (You can even play a little mini demo to give you an idea of what it will be like!) There is even a really sweet video Thisbe made with slick logo animation and you can hear my tiny wraith voice!

It's going to be really rad and I'm so excited about this--!

LET'S MAKE IT HAPPEN TOGETHER- visit the Kickstarter here! -Hamlet




visual

Visualizing Emotions

Sociologists studying emotion have opened up the inner, private feelings of anger, fear, shame, and love to reveal the far-reaching effects of social forces on our most personal experiences. This subfield has given us new words to make sense of shared experiences: emotional labor in our professional lives, collective effervescence at sporting events and concerts, […]




visual

Visual Music

It’s Cinco De Mayo! Stay safe! HI! PLEASE FOLLOW @LAMEBOOK ON INSTAGRAM! THANK YOU!!




visual

Re: Visualization shows droplets from one cough on an airplane




visual

The 2020 Oscar nominees for visual effects: Playing with ages, time and reality

"The Irishman," "1917," "The Lion King," "Star Wars: The Rise of Skywalker," "Avengers: Endgame" — a rundown of the visual-effects Oscar finalists.




visual

World War I adventure '1917' wins visual effects Oscar

"1917," Sam Mendes' World War I adventure tops fellow best-picture nominee "The Irishman" along with three flashier contenders: "The Lion King," "Star Wars: The Rise of Skywalker," "Avengers: Endgame."