
Warwick Image Forensics Dataset for Device Fingerprinting In Multimedia Forensics. (arXiv:2004.10469v2 [cs.CV] UPDATED)

Device fingerprints such as sensor pattern noise (SPN) are widely used for provenance analysis and image authentication. Over the past few years, the rapid advancement of digital photography has greatly reshaped the image capture pipeline on consumer-level mobile devices. The flexibility of camera parameter settings and the emergence of multi-frame photography algorithms, especially high dynamic range (HDR) imaging, bring new challenges to device fingerprinting. Subsequent study of these topics requires a purposefully built image dataset. In this paper, we present the Warwick Image Forensics Dataset, a dataset of more than 58,600 images captured using 14 digital cameras with various exposure settings. Special attention to the exposure settings allows the images to be used by different multi-frame computational photography algorithms and for subsequent device fingerprinting. The dataset is released as open source, free to use for the digital forensics community.





Decoding EEG Rhythms During Action Observation, Motor Imagery, and Execution for Standing and Sitting. (arXiv:2004.04107v2 [cs.HC] UPDATED)

Event-related desynchronization and synchronization (ERD/S) and movement-related cortical potentials (MRCP) play an important role in brain-computer interfaces (BCI) for lower-limb rehabilitation, particularly in standing and sitting. However, little is known about the differences in cortical activation between standing and sitting, especially how the brain's intention modulates the pre-movement sensorimotor rhythms during transitions between the two movements. In this study, we aim to investigate the decoding of continuous EEG rhythms during action observation (AO), motor imagery (MI), and motor execution (ME) for standing and sitting. We developed a behavioral task in which participants were instructed to perform both AO and MI/ME of sit-to-stand and stand-to-sit actions. Our results demonstrated that ERD was prominent during AO, whereas ERS was typical during MI at the alpha band across the sensorimotor area. A combination of the filter bank common spatial pattern (FBCSP) and a support vector machine (SVM) was used for both offline and pseudo-online classification. The offline analysis indicated that classification of AO versus MI provided the highest mean accuracy of 82.73±2.38% in the stand-to-sit transition. The pseudo-online analysis demonstrated higher performance when decoding neural intentions from the MI paradigm than from the ME paradigm. These observations point to the promise of using our developed tasks, which integrate both AO and MI, to build future exoskeleton-based rehabilitation systems.
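The alpha-band ERD/ERS effects discussed above come down to comparing spectral power inside a frequency band. A minimal periodogram sketch of band-power extraction (the function name and defaults are illustrative; this is not the paper's FBCSP-SVM pipeline):

```python
import numpy as np

def band_power(signal, fs, band=(8.0, 13.0)):
    """Mean spectral power of a single EEG channel inside a frequency band.

    signal: 1-D array of samples; fs: sampling rate in Hz. The default band
    is the alpha band discussed in the abstract. Simple periodogram only.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()
```

A real pipeline would compute such powers over many band-pass filter banks, spatially filter them with CSP, and feed the log-variances to an SVM.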





Hierarchical Neural Architecture Search for Single Image Super-Resolution. (arXiv:2003.04619v2 [cs.CV] UPDATED)

Deep neural networks have exhibited promising performance in image super-resolution (SR). Most SR models follow a hierarchical architecture that contains both the cell-level design of computational blocks and the network-level design of the positions of upsampling blocks. However, designing SR models heavily relies on human expertise and is very labor-intensive. More critically, these SR models often contain a huge number of parameters and may not meet the computational resource constraints of real-world applications. To address these issues, we propose a Hierarchical Neural Architecture Search (HNAS) method to automatically design promising architectures under different computation cost budgets. To this end, we design a hierarchical SR search space and propose a hierarchical controller for architecture search. Such a hierarchical controller is able to simultaneously find promising cell-level blocks and network-level positions of upsampling layers. Moreover, to design compact architectures with promising performance, we build a joint reward that considers both performance and computation cost to guide the search process. Extensive experiments on five benchmark datasets demonstrate the superiority of our method over existing methods.





SCAttNet: Semantic Segmentation Network with Spatial and Channel Attention Mechanism for High-Resolution Remote Sensing Images. (arXiv:1912.09121v2 [cs.CV] UPDATED)

High-resolution remote sensing images (HRRSIs) contain substantial ground object information, such as texture, shape, and spatial location. Semantic segmentation, which is an important task for element extraction, has been widely used in processing mass HRRSIs. However, HRRSIs often exhibit large intraclass variance and small interclass variance due to the diversity and complexity of ground objects, thereby bringing great challenges to a semantic segmentation task. In this paper, we propose a new end-to-end semantic segmentation network, which integrates lightweight spatial and channel attention modules that can refine features adaptively. We compare our method with several classic methods on the ISPRS Vaihingen and Potsdam datasets. Experimental results show that our method can achieve better semantic segmentation results. The source codes are available at https://github.com/lehaifeng/SCAttNet.





IPG-Net: Image Pyramid Guidance Network for Small Object Detection. (arXiv:1912.00632v3 [cs.CV] UPDATED)

Convolutional Neural Network-based object detection faces a typical dilemma: spatial information is well preserved in the shallow layers, which unfortunately lack semantic information, while the deep layers carry high-level semantic concepts but lose much spatial information, resulting in a serious information imbalance. To provide shallow layers with enough semantic information, Feature Pyramid Networks (FPN) build a top-down propagation path. In this paper, beyond top-down information propagation for shallow layers, we propose a novel network called the Image Pyramid Guidance Network (IPG-Net) to ensure that both spatial and semantic information are abundant at every layer. IPG-Net has two main parts: an image pyramid guidance transformation module and an image pyramid guidance fusion module. Our main idea is to introduce image pyramid guidance into the backbone stream to solve the information imbalance problem, which alleviates the vanishing of small-object features. The IPG transformation module ensures that, even in the deepest stage of the backbone, there is enough spatial information for bounding-box regression and classification. Furthermore, we design an effective fusion module to fuse features from the image pyramid with features from the backbone stream. We apply this network to both one-stage and two-stage detection models and obtain state-of-the-art results on the most popular benchmark datasets, i.e., MS COCO and Pascal VOC.





Biologic and Prognostic Feature Scores from Whole-Slide Histology Images Using Deep Learning. (arXiv:1910.09100v4 [q-bio.QM] UPDATED)

Histopathology reflects the molecular changes of a disease and provides prognostic phenotypes representing its progression. In this study, we introduced feature scores generated from hematoxylin-and-eosin histology images by deep learning (DL) models developed for prostate pathology. We demonstrated that these feature scores were significantly prognostic for time-to-event endpoints (biochemical recurrence and cancer-specific survival) and were simultaneously associated with relevant genomic alterations and molecular subtypes, using already-trained DL models that were not previously exposed to the datasets of the current study. Further, we discussed the potential of such feature scores to improve the current tumor grading system, as well as the challenges associated with tumor heterogeneity and the development of prognostic models from histology images. Our findings uncover the potential of feature scores from histology images as digital biomarkers in precision medicine and as an expanding utility for digital pathology.





NH-HAZE: An Image Dehazing Benchmark with Non-Homogeneous Hazy and Haze-Free Images. (arXiv:2005.03560v1 [cs.CV])

Image dehazing is an ill-posed problem that has been extensively studied in recent years. Objective performance evaluation of dehazing methods has been one of the major obstacles due to the lack of reference datasets. While synthetic datasets have shown important limitations, the few realistic datasets introduced recently assume homogeneous haze over the entire scene. Since in many real cases haze is not uniformly distributed, we introduce NH-HAZE, a non-homogeneous realistic dataset with pairs of real hazy and corresponding haze-free images. This is the first non-homogeneous image dehazing dataset and contains 55 outdoor scenes. The non-homogeneous haze was introduced into the scenes using a professional haze generator that imitates the real conditions of hazy scenes. Additionally, this work presents an objective assessment of several state-of-the-art single-image dehazing methods evaluated on the NH-HAZE dataset.





How Can CNNs Use Image Position for Segmentation?. (arXiv:2005.03463v1 [eess.IV])

Convolution is an equivariant operation, and image position does not affect its result. A recent study shows that the zero-padding employed in the convolutional layers of CNNs provides position information to the network. The study further claims that this position information enables accurate inference for several tasks, such as object recognition and segmentation. However, there is a technical issue with the design of that study's experiments, so the correctness of the claim is yet to be verified. Moreover, absolute image position may not be essential for the segmentation of natural images, in which target objects can appear at any position. In this study, we investigate how positional information is, and can be, utilized for segmentation tasks. Toward this end, we consider positional encoding (PE), which adds channels embedding image position to the input images, and compare PE with several padding methods. Given the above nature of natural images, we choose medical image segmentation tasks, in which absolute position appears to be relatively important, as the same organs (of different patients) are captured at similar sizes and positions. We draw a mixed conclusion from the experimental results: positional encoding certainly works in some cases, but absolute image position may not be as important for segmentation tasks as is often thought.
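Coordinate-channel positional encoding of the kind compared in this study can be sketched as appending normalized x/y maps to the input. An illustrative reading only, not the paper's exact implementation:

```python
import numpy as np

def add_positional_encoding(image):
    """Append two channels holding normalized y/x coordinates to an image.

    image: (H, W, C) array. Returns (H, W, C + 2), where the extra channels
    run linearly from 0 at the top/left edge to 1 at the bottom/right edge.
    """
    h, w = image.shape[:2]
    ys = np.linspace(0.0, 1.0, h)[:, None]            # (H, 1) row coordinates
    xs = np.linspace(0.0, 1.0, w)[None, :]            # (1, W) column coordinates
    y_chan = np.broadcast_to(ys, (h, w))[..., None]   # (H, W, 1)
    x_chan = np.broadcast_to(xs, (h, w))[..., None]
    return np.concatenate([image, y_chan, x_chan], axis=-1)
```

A convolution applied to such an input can condition on absolute position even without zero-padding cues.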





NTIRE 2020 Challenge on Spectral Reconstruction from an RGB Image. (arXiv:2005.03412v1 [eess.IV])

This paper reviews the second challenge on spectral reconstruction from RGB images, i.e., the recovery of whole-scene hyperspectral (HS) information from a 3-channel RGB image. As in the previous challenge, two tracks were provided: (i) a "Clean" track, where HS images are estimated from noise-free RGBs that are themselves calculated numerically from the ground-truth HS images and supplied spectral sensitivity functions; and (ii) a "Real World" track, simulating capture by an uncalibrated and unknown camera, where the HS images are recovered from noisy JPEG-compressed RGB images. A new, larger-than-ever natural hyperspectral image dataset is presented, containing a total of 510 HS images. The Clean and Real World tracks had 103 and 78 registered participants, respectively, with 14 teams competing in the final testing phase. A description of the proposed methods, their challenge scores, and an extensive evaluation of the top-performing methods are also provided, gauging the state of the art in spectral reconstruction from an RGB image.
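The "Clean"-track simulation described above amounts to integrating each hyperspectral band against the camera's sensitivity functions. A minimal sketch (function name and normalization are illustrative, not the challenge's official code):

```python
import numpy as np

def hs_to_rgb(hs_cube, sensitivity):
    """Numerically simulate a noise-free RGB from a hyperspectral cube.

    hs_cube: (H, W, B) radiance over B spectral bands.
    sensitivity: (B, 3) spectral sensitivity functions, one column per
    RGB channel. Returns an (H, W, 3) image: for each pixel, each channel
    is the dot product of the band spectrum with that channel's sensitivity.
    """
    return hs_cube @ sensitivity
```

Spectral reconstruction is then the inverse task: learning to map the (H, W, 3) output back to the (H, W, B) cube.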





Scene Text Image Super-Resolution in the Wild. (arXiv:2005.03341v1 [cs.CV])

Low-resolution text images are often seen in natural scenes, such as documents captured by mobile phones. Recognizing low-resolution text images is challenging because they lose detailed content information, leading to poor recognition accuracy. An intuitive solution is to introduce super-resolution (SR) techniques as pre-processing. However, previous single-image super-resolution (SISR) methods are trained on synthetic low-resolution images (e.g., bicubic down-sampling), which are simple and not suitable for real low-resolution text recognition. To this end, we propose a real scene-text SR dataset, termed TextZoom. It contains paired real low-resolution and high-resolution images captured in the wild by cameras with different focal lengths. It is more authentic and challenging than synthetic data, as shown in Fig. 1. We argue that improving recognition accuracy is the ultimate goal of scene-text SR. To this end, a new text super-resolution network, termed TSRN, with three novel modules is developed. (1) A sequential residual block is proposed to extract the sequential information of the text images. (2) A boundary-aware loss is designed to sharpen the character boundaries. (3) A central alignment module is proposed to relieve the misalignment problem in TextZoom. Extensive experiments on TextZoom demonstrate that our TSRN largely improves recognition accuracy, by over 13% for CRNN and by nearly 9.0% for ASTER and MORAN, compared to synthetic SR data. Furthermore, our TSRN clearly outperforms 7 state-of-the-art SR methods in boosting the recognition accuracy of LR images in TextZoom; for example, it outperforms LapSRN by over 5% and 8% in the recognition accuracy of ASTER and CRNN. Our results suggest that low-resolution text recognition in the wild is far from being solved, and more research effort is needed.





Wavelet Integrated CNNs for Noise-Robust Image Classification. (arXiv:2005.03337v1 [cs.CV])

Convolutional Neural Networks (CNNs) are generally sensitive to noise: small image noise can cause drastic changes in the output. To suppress the effect of noise on the final prediction, we enhance CNNs by replacing max-pooling, strided convolution, and average-pooling with the Discrete Wavelet Transform (DWT). We present general DWT and Inverse DWT (IDWT) layers applicable to various wavelets, such as Haar, Daubechies, and Cohen, and design wavelet-integrated CNNs (WaveCNets) using these layers for image classification. In WaveCNets, feature maps are decomposed into low-frequency and high-frequency components during down-sampling. The low-frequency component stores the main information, including basic object structures, and is transmitted to subsequent layers to extract robust high-level features. The high-frequency components, containing most of the data noise, are dropped during inference to improve the noise robustness of the WaveCNets. Our experimental results on ImageNet and ImageNet-C (the noisy version of ImageNet) show that WaveCNets, the wavelet-integrated versions of VGG, ResNets, and DenseNet, achieve higher accuracy and better noise robustness than their vanilla versions.
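The DWT down-sampling idea can be sketched for the simplest case, a one-level 2D Haar transform of a single-channel feature map, keeping the low-frequency band and exposing the high-frequency bands to be dropped. A minimal sketch assuming even spatial dimensions; the paper's layers also support other wavelets:

```python
import numpy as np

def haar_dwt_downsample(fmap):
    """One-level 2D Haar DWT of a feature map.

    fmap: (H, W) array with even H and W. Returns (ll, (lh, hl, hh)),
    each of shape (H/2, W/2). ll is the low-frequency band (a scaled
    local average, used in place of stride-2 pooling); the other three
    bands carry high-frequency detail and most of the noise.
    """
    a = fmap[0::2, 0::2]
    b = fmap[0::2, 1::2]
    c = fmap[1::2, 0::2]
    d = fmap[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, (lh, hl, hh)
```

On a smooth region the three high-frequency bands are near zero, which is why discarding them suppresses noise while preserving object structure.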





Interval type-2 fuzzy logic system based similarity evaluation for image steganography. (arXiv:2005.03310v1 [cs.MM])

Similarity measure, also called information measure, is a concept used to distinguish different objects. It has been studied in different contexts using mathematical, psychological, and fuzzy approaches. Image steganography is the art of hiding secret data in an image in such a way that it cannot be detected by an intruder. In image steganography, hiding secret data in the plain or non-edge regions of the image is significant due to the high similarity and redundancy of the pixels in their neighborhood. However, the similarity measure of neighboring pixels, i.e., their proximity in color space, is perceptual rather than mathematical. This paper proposes an interval type-2 fuzzy logic system (IT2 FLS) to determine the similarity between neighboring pixels by involving instinctive human perception through a rule-based approach. The pixels of the image having high similarity values, calculated using the proposed IT2 FLS similarity measure, are selected for embedding via the least significant bit (LSB) method. We term the proposed steganographic procedure the IT2 FLS LSB method. Moreover, we have developed two further methods: a type-1 fuzzy logic system based least significant bit (T1FLS LSB) method and a Euclidean distance based similarity measure for least significant bit (SM LSB) steganography. Experimental simulations were conducted on a collection of images, and quality index metrics such as PSNR, UQI, and SSIM were used. All three steganographic methods were applied to the datasets and the quality metrics calculated. The resulting stego images and results are shown and thoroughly compared to determine the efficacy of the IT2 FLS LSB method. Finally, we present a comparative analysis of the proposed approach against existing well-known steganographic methods to show its effectiveness.
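Once the similarity measure has selected high-similarity pixels, the LSB embedding step itself is simple. A toy sketch of that step only (the IT2 FLS pixel selection is not reproduced, and the function names are illustrative):

```python
def embed_lsb(pixels, bits):
    """Embed a bit sequence into the least significant bits of pixels.

    pixels: list of 0-255 intensities, e.g. those the similarity measure
    ranked as high-similarity; bits: iterable of 0/1. Each embedded bit
    changes a pixel by at most 1, which is why smooth regions hide it well.
    """
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear the LSB, then write the data bit
    return out

def extract_lsb(pixels, n):
    """Recover the first n embedded bits from the stego pixels."""
    return [p & 1 for p in pixels[:n]]
```

Quality metrics such as PSNR are then computed between the cover and stego images to confirm imperceptibility.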





NTIRE 2020 Challenge on Image Demoireing: Methods and Results. (arXiv:2005.03155v1 [cs.CV])

This paper reviews the Challenge on Image Demoireing that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2020. Demoireing is a difficult task of removing moire patterns from an image to reveal an underlying clean image. The challenge was divided into two tracks. Track 1 targeted the single image demoireing problem, which seeks to remove moire patterns from a single image. Track 2 focused on the burst demoireing problem, where a set of degraded moire images of the same scene were provided as input, with the goal of producing a single demoired image as output. The methods were ranked in terms of their fidelity, measured using the peak signal-to-noise ratio (PSNR) between the ground truth clean images and the restored images produced by the participants' methods. The tracks had 142 and 99 registered participants, respectively, with a total of 14 and 6 submissions in the final testing stage. The entries span the current state-of-the-art in image and burst image demoireing problems.
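The fidelity ranking described above uses PSNR between ground truth and restoration, which is straightforward to compute. A minimal sketch over flat pixel lists (the challenge's official scoring script may differ in details such as color-space handling):

```python
import math

def psnr(clean, restored, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two equal-length
    flat lists of pixel values. Higher is better; identical images
    give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(clean, restored)) / len(clean)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)
```

Ranking submissions then reduces to sorting methods by their mean PSNR over the test set.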





Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines. (arXiv:2005.03106v1 [cs.CV])

Smart meters enable remote and automatic electricity, water, and gas consumption reading and are being widely deployed in developed countries. Nonetheless, a huge number of non-smart meters are still in operation. Image-based Automatic Meter Reading (AMR) focuses on reading this type of meter. We estimate that the Energy Company of Paraná (Copel), in Brazil, performs more than 850,000 readings of dial meters per month. Those meters are the focus of this work. Our main contributions are: (i) a public real-world dial-meter dataset (shared upon request) called UFPR-ADMR; (ii) a deep learning-based recognition baseline on the proposed dataset; and (iii) a detailed error analysis of the main issues present in AMR for dial meters. To the best of our knowledge, this is the first work to introduce deep learning approaches to multi-dial meter reading and to perform experiments on unconstrained images. We achieved a 100.0% F1-score in the dial detection stage with both Faster R-CNN and YOLO, while the recognition rates reached 93.6% for dials and 75.25% for meters using Faster R-CNN (ResNeXt-101).





Line Artefact Quantification in Lung Ultrasound Images of COVID-19 Patients via Non-Convex Regularisation. (arXiv:2005.03080v1 [eess.IV])

In this paper, we present a novel method for line artefact quantification in lung ultrasound (LUS) images of COVID-19 patients. We formulate this as a non-convex regularisation problem involving a sparsity-enforcing, Cauchy-based penalty function and the inverse Radon transform. We employ a simple local maxima detection technique in the Radon transform domain, associated with known clinical definitions of line artefacts. Despite being non-convex, the proposed method has guaranteed convergence via a proximal splitting algorithm and accurately identifies both horizontal and vertical line artefacts in LUS images. In order to reduce the number of false and missed detections, our method includes a two-stage validation mechanism performed in both the Radon and image domains. We evaluate the performance of the proposed method against the current state-of-the-art B-line identification method and show a considerable performance gain, with 87% of B-lines correctly detected in LUS images of nine COVID-19 patients. In addition, owing to its fast convergence, taking around 12 seconds per frame, our proposed method is readily applicable to processing LUS image sequences.





CovidCTNet: An Open-Source Deep Learning Approach to Identify Covid-19 Using CT Image. (arXiv:2005.03059v1 [eess.IV])

Coronavirus disease 2019 (Covid-19) is highly contagious, with limited treatment options. Early and accurate diagnosis of Covid-19 is crucial to reducing the spread of the disease and its associated mortality. Currently, detection by reverse transcriptase polymerase chain reaction (RT-PCR) is the gold standard for outpatient and inpatient detection of Covid-19. RT-PCR is rapid; however, its detection accuracy is only ~70-75%. Another approved strategy is computed tomography (CT) imaging, which has a much higher sensitivity of ~80-98% but a similar accuracy of 70%. To enhance the accuracy of CT-based detection, we developed an open-source set of algorithms called CovidCTNet that successfully differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT-based detection to 90%, compared to 70% for radiologists. The model is designed to work with heterogeneous and small sample sizes, independent of the CT imaging hardware. To facilitate the detection of Covid-19 globally and assist radiologists and physicians in screening, we are releasing all algorithms and parametric details in an open-source format. Open-source sharing of CovidCTNet enables developers to rapidly improve and optimize services while preserving user privacy and data ownership.






Parsing and rendering structured images

Systems and methods for generating a tuple of structured data files are described herein. In one example, a method includes detecting an expression that describes a structure of a structured image using a constructor. The method can also include using an inference-rule based search strategy to identify a hierarchical arrangement of bounding boxes in the structured image that match the expression. Furthermore, the method can include generating a first tuple of structured data files based on the identified hierarchical arrangement of bounding boxes in the structured image.





Identifying particular images from a collection

A method of identifying one or more particular images from an image collection, includes indexing the image collection to provide image descriptors for each image in the image collection such that each image is described by one or more of the image descriptors; receiving a query from a user specifying at least one keyword for an image search; and using the keyword(s) to search a second collection of tagged images to identify co-occurrence keywords. The method further includes using the identified co-occurrence keywords to provide an expanded list of keywords; using the expanded list of keywords to search the image descriptors to identify a set of candidate images satisfying the keywords; grouping the set of candidate images according to at least one of the image descriptors, and selecting one or more representative images from each grouping; and displaying the representative images to the user.
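The co-occurrence expansion step described above can be sketched as counting, over the second (tagged) collection, which tags appear alongside the query keywords. A minimal sketch; the function name, scoring, and `top_n` cutoff are illustrative choices not prescribed by the patent:

```python
from collections import Counter

def expand_keywords(query_keywords, tagged_images, top_n=3):
    """Expand a query with co-occurrence keywords from a tagged collection.

    tagged_images: list of tag sets, one per image in the tagged collection.
    Returns the original query keywords followed by the top_n tags that
    most frequently co-occur with them.
    """
    query = set(query_keywords)
    co = Counter()
    for tags in tagged_images:
        if query & set(tags):               # the image matches the query
            co.update(set(tags) - query)    # count its other tags
    expanded = list(query_keywords)
    expanded += [kw for kw, _ in co.most_common(top_n)]
    return expanded
```

The expanded list is then matched against the image descriptors of the user's own collection to find candidate images.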





Active ray-curable inkjet ink, and image formation method

The active ray-curable inkjet ink comprises a gelling agent, photopolymerizable compounds and a photoinitiator, and reversibly transitions into a sol-gel phase according to the temperature. Therein: (1) a (meth)acrylate compound having a molecular weight of 300-1,500 and having 3-14 (—CH2—CH2—O—) structural units within a molecule is included as the first photopolymerizable compound at a proportion of 30-70 mass % relative to the total mass of the ink; (2) a (meth)acrylate compound having a molecular weight of 300-1,500 and a C log P value of 4.0-7.0 is included as the second photopolymerizable compound at a proportion of 10-40 mass % relative to the total mass of the ink; and (3) the gelling agent has a total of at least 12 carbon atoms, and has a straight or branched alkyl chain including at least three carbon atoms.





System, method and program product for cost-aware selection of stored virtual machine images for subsequent use

A system, method, and computer program product for allocating shared resources. Upon receiving requests for resources, the cost of bundling software in a virtual machine (VM) image is automatically generated. Software is selected for each bundle according to the time required to install it where required, offset by the time to uninstall it where not required. A number of VM images having the highest software bundle value (i.e., the highest-cost bundles) are selected and stored, e.g., in a machine image store. With subsequent requests for resources, VMs may be instantiated from one or more stored VM images, and stored images may be selectively updated with new images.





Image processing apparatus and control method thereof and image processing system

An image processing apparatus including: an image processor which processes a broadcasting signal to display an image based on the processed signal; a communication unit which connects to a server; a voice input unit which receives a user's speech; a voice processor which performs a preset operation corresponding to a voice command in the speech; and a controller which processes the voice command through either the voice processor or the server when speech is input through the voice input unit. If the voice command includes a keyword relating to the call sign of a broadcasting channel, the controller controls the voice processor or the server to select a recommended call sign corresponding to the keyword according to a predetermined selection condition, and performs the commanded operation with respect to the broadcasting channel of the recommended call sign.





Image-based character recognition

Various embodiments enable a device to perform tasks such as processing an image to recognize and locate text in the image, and providing the recognized text to an application executing on the device to perform a function (e.g., calling a number, opening an internet browser, etc.) associated with the recognized text. In at least one embodiment, processing the image includes substantially simultaneously or concurrently processing the image with at least two recognition engines, such as at least two optical character recognition (OCR) engines, running in a multithreaded mode. In at least one embodiment, the recognition engines can be tuned so that their respective processing speeds are roughly the same. Utilizing multiple recognition engines enables processing latency to be close to that of using only one recognition engine.
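The multithreaded-engines idea can be sketched with a thread pool fanning the same image out to several engines at once. The engines here are stand-in callables, not real OCR implementations, and the reconciliation of their outputs is left out:

```python
from concurrent.futures import ThreadPoolExecutor

def recognize_concurrently(image, engines):
    """Run several recognition engines on the same image in parallel threads.

    engines: list of callables mapping an image to recognized text.
    Returns one result per engine, in the order the engines were given;
    a real system would then reconcile or vote on the results.
    """
    with ThreadPoolExecutor(max_workers=len(engines)) as pool:
        futures = [pool.submit(engine, image) for engine in engines]
        return [f.result() for f in futures]
```

With engines tuned to similar speeds, the wall-clock latency of the pool approaches that of the slowest single engine, as the abstract claims.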





Analysis of images located within three-dimensional environments

Images are analyzed within a 3D environment that is generated based on spatial relationships of the images and that allows users to experience the images in the 3D environment. Image analysis may include ranking images based on user viewing information, such as the number of users who have viewed an image and how long an image was viewed. Image analysis may further include analyzing the spatial density of images within a 3D environment to determine points of user interest.
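Ranking by viewing information can be sketched with a simple score over per-image statistics. The scoring formula (viewers times total view time) is an illustrative choice; the patent does not prescribe a specific formula:

```python
def rank_images(view_stats, top_n=2):
    """Rank images in a 3D environment by user viewing information.

    view_stats: dict mapping image_id -> (num_viewers, total_view_seconds).
    Returns the top_n image ids by the product of the two statistics.
    """
    scored = sorted(view_stats.items(),
                    key=lambda kv: kv[1][0] * kv[1][1],
                    reverse=True)
    return [image_id for image_id, _ in scored[:top_n]]
```

The same statistics could feed the spatial-density analysis, by aggregating scores over image positions in the 3D environment.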





Management of multiple software images with shared memory blocks

A data processing entity includes a mass memory with a plurality of memory locations for storing memory blocks. Each of a plurality of software images includes a plurality of memory blocks with corresponding image addresses within the software image. Memory blocks of the software images stored in the boot locations of a current software image are relocated, and the boot blocks of the current software image are stored in the corresponding boot locations. The data processing entity is booted from the boot blocks of the current software image in the corresponding boot locations, thereby loading an access function. Each request to access a selected memory block of the current software image is served by the access function, which accesses the selected memory block in the associated memory location provided by the control structure.





Discarding sensitive data from persistent point-in-time image

A network storage server implements a method to discard sensitive data from a Persistent Point-In-Time Image (PPI). The server first efficiently identifies a dataset containing the sensitive data from a plurality of datasets managed by the PPI. Each of the plurality of datasets is read-only and encrypted with a first encryption key. The server then decrypts each of the plurality of datasets, except the dataset containing the sensitive data, with the first encryption key. The decrypted datasets are re-encrypted with a second encryption key, and copied to a storage structure. Afterward, the first encryption key is shredded.
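The shredding flow above can be sketched end to end: every dataset except the sensitive one is decrypted with the old key, re-encrypted with a new key, and copied; discarding the old key then renders the skipped dataset unrecoverable. The XOR "cipher" here is a deliberately toy stand-in for a real encryption primitive and must not be used for actual security:

```python
def xor_crypt(data, key):
    """Toy symmetric 'cipher': XOR with a repeating key. Stand-in only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def shred_sensitive(datasets, sensitive_index, old_key, new_key):
    """Sketch of the PPI shredding flow.

    datasets: list of ciphertexts encrypted under old_key. Every dataset
    except the one at sensitive_index is decrypted with old_key,
    re-encrypted with new_key, and copied to the new storage structure.
    The caller then shreds old_key, leaving the sensitive dataset
    permanently undecipherable.
    """
    return {
        i: xor_crypt(xor_crypt(ct, old_key), new_key)
        for i, ct in enumerate(datasets)
        if i != sensitive_index
    }
```

Note that the read-only PPI itself is never modified; only the copied, re-keyed datasets survive the key shredding.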





Image forming apparatus, system-on-chip (SoC) unit, and driving method thereof

An image forming apparatus is connected to a host device and includes first and second power domains which are separately supplied with power; first and second memories disposed in the second power domain; a main controller disposed in the first power domain which performs control operations using the first memory in a normal mode; and a sub-controller disposed in the second power domain which performs control operations using the second memory in a power-saving mode. When the normal mode changes to the power-saving mode, the power supply to the first power domain is shut off, the first memory operates in a self-refresh mode, and the main controller copies central processing unit (CPU) context information into a context storage unit. When the power-saving mode changes back to the normal mode, the main controller is booted using the CPU context information stored in the context storage unit.





Velocity measurement of MR-imaged fluid flows

Velocity of MR-imaged fluid flows is measured. Data representing a measure of distance traveled by flowing fluid appearing in at least two MR images of a subject's tissue taken at different respective imaging times is generated. Data representing at least one fluid velocity measurement of the flowing fluid is generated by calculating at least one instance of distance traveled by the fluid divided by elapsed time during travel based on different respective imaging times. Data representing at least one fluid velocity measurement is then output to at least one of: (a) a display screen, (b) a non-transitory data storage medium, and (c) a remotely located site.
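The core computation, distance traveled divided by elapsed time between imaging times, can be sketched directly. The function name and units are illustrative; real MR velocimetry additionally involves tracking the fluid feature across the images:

```python
def flow_velocity(positions, times):
    """Per-interval velocity of MR-imaged fluid.

    positions: distances (e.g. mm) of a tracked fluid feature in successive
    MR images; times: the corresponding imaging times (e.g. s).
    Returns one velocity per consecutive pair of images.
    """
    return [
        (p1 - p0) / (t1 - t0)
        for (p0, p1), (t0, t1) in zip(zip(positions, positions[1:]),
                                      zip(times, times[1:]))
    ]
```

Each output value is one instance of "distance traveled divided by elapsed time during travel" as claimed, ready to be sent to a display, storage medium, or remote site.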





System and method for acquiring images from within a tissue

Systems and methods for imaging within depth layers of a tissue include illuminating light rays at different changing wavelengths (frequencies), collimating illuminated light rays using a collimator, and splitting light rays using a beam splitter, such that some of the light rays are directed towards a reference mirror and some of the rays are directed towards the tissue. The systems and methods further include reflecting light rays from the reference mirror towards the imager, filtering out non-collimated light rays reflected off the tissue by using a telecentric optical system, and reflecting collimated light rays reflected off the tissue towards the imager, thus creating an image of an interference pattern based on collimated light rays reflected off the tissue and off the reference mirror. The method may further include creating full 2D images from the interference pattern for each depth layer of the tissue using Fast Fourier transform.
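The final Fourier step can be sketched for a single A-scan: an interference signal sampled across the changing wavelengths (i.e., evenly spaced wavenumbers) transforms into a depth profile. This is a standard Fourier-domain interferometry sketch, not the patent's full 2D pipeline:

```python
import numpy as np

def depth_profile(interference, d_wavenumber):
    """Recover a depth profile from a spectral interference signal via FFT.

    interference: 1-D intensity sampled at evenly spaced wavenumbers;
    d_wavenumber: the sampling step. Returns (depths, magnitude), where a
    peak in magnitude indicates a reflecting layer at that depth. The DC
    (mean) component is removed so it does not mask shallow reflectors.
    """
    spectrum = np.fft.rfft(interference - np.mean(interference))
    depths = np.fft.rfftfreq(len(interference), d=d_wavenumber)
    return depths, np.abs(spectrum)
```

Repeating this over every pixel of the 2D interference images yields the full per-depth-layer images the patent describes.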





Apparatus and methods for determining a plurality of local calibration factors for an image

Apparatus and methods are described including acquiring a first set of extraluminal images of a lumen, using an extraluminal imaging device. At least one of the first set of images is designated as a roadmap image. While an endoluminal device is being moved through the lumen, a second set of extraluminal images is acquired. A plurality of features that are visible within images belonging to the second set of extraluminal images are identified. In response to the identified features in the images belonging to the second set of extraluminal images, a plurality of local calibration factors associated with respective portions of the roadmap image are determined. Other applications are also described.
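A local calibration factor of the kind described, a pixel-to-physical-distance scale for one portion of the roadmap image, can be sketched as follows. The marker-spacing assumption and all names are hypothetical; the patent does not specify this computation.

```python
import math

# A local calibration factor (mm per pixel) for an image region, derived from
# two identified features with a known physical spacing (e.g. device markers).
def local_calibration_factor(feat_a_px, feat_b_px, known_spacing_mm):
    """mm-per-pixel for the region spanned by two identified features."""
    pixel_dist = math.dist(feat_a_px, feat_b_px)
    return known_spacing_mm / pixel_dist

# Two markers 10 mm apart appear 40 px apart in this portion of the image.
f = local_calibration_factor((100, 100), (100, 140), known_spacing_mm=10.0)
print(f)  # 0.25 (mm per pixel)
```

Computing this factor separately for features seen in different portions of the roadmap image gives the plurality of local calibration factors the abstract refers to.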





Coating composition for low refractive layer including fluorine-containing compound, anti-reflection film using the same, polarizer and image display device including the same

Provided are a coating composition for a low refractive layer including a fluorine-containing compound of the following Chemical Formula 1, an anti-reflection film using the same, and a polarizer and an image display device including the same. The fluorine-containing compound of Chemical Formula 1 has a low refractive index of 1.28 to 1.40, making it possible to easily adjust the refractive index of the anti-reflection film, and it is useful as a coating material for an anti-reflection film having excellent mechanical properties such as durability.





Hard coat film, polarizer and image display device

Provided is a hard coat film having high hardness and excellent restorability. The hard coat film comprises a light-transmitting substrate and a hard coat layer, the hard coat layer comprising a cured product of a composition for a hard coat layer, the composition including an isocyanuric skeleton-containing urethane (meth)acrylate.





Polarizing plate, method for producing same and image display device comprising same

The present invention relates to a polarizing plate, a method for producing the same, and an image display device comprising the same. More specifically, the polarizing plate comprises: a) a polarizer; and b) a hardening resin layer which is provided on at least one side of the polarizer and formed from a photocurable composition comprising, based on 100 parts by weight of the photocurable composition, 4 to 95 parts by weight of (A) a photocurable acrylic polymer, 4 to 95 parts by weight of (B) a poly-functional acrylic monomer, and 1 to 20 parts by weight of (C) a photo-polymerization initiator. According to the present invention, a polarizing plate which exhibits excellent polarizing properties and durability, has high surface hardness, and may be formed as a thin plate can be provided.





Image processing method and image processing apparatus

To provide an image processing method including at least one of: recording an image onto a thermoreversible recording medium, in which transparency or color tone reversibly changes depending upon temperature, by applying a laser beam from a CO2 laser device so as to heat the thermoreversible recording medium; and erasing an image recorded on the thermoreversible recording medium by heating the thermoreversible recording medium, wherein an intensity distribution of the laser beam applied in the image recording step satisfies the relationship represented by Expression 1 shown below, 1.59





Laser imageable polyolefin film

The presently disclosed subject matter is directed generally to a polymeric film that comprises at least one laser imageable marking layer. The marking layer comprises a polyolefin, a photochromatic pigment, and an additive. It has been surprisingly discovered that a polyolefin film comprising a marking layer formulated with a photochromatic pigment and an additive offers a substantial advantage over prior art methods of laser imaging polyolefin films.





Thermal image receiver elements prepared using aqueous formulations

A thermal image receiver element has, as its outermost layer, a dry image receiving layer with a Tg of at least 25° C. The dry image receiving layer has a dry thickness of at least 0.5 μm and up to and including 5 μm. It comprises a polymer binder matrix that consists essentially of: (1) a water-dispersible acrylic polymer comprising chemically reacted or chemically non-reacted hydroxyl, phospho, phosphonate, sulfo, sulfonate, carboxy, or carboxylate groups, and (2) a water-dispersible polyester that has a Tg of 30° C. or less. The water-dispersible acrylic polymer is present in an amount of at least 55 weight % of the total dry image receiving layer weight and at a dry ratio to the water-dispersible polyester of at least 1:1 and up to and including 20:1. The thermal image receiver element can be used to prepare thermal dye images after thermal transfer from a thermal donor element.





Thermal image receiver elements having release agents

A thermal image receiver element has, as its outermost layer, a dry image receiving layer with a Tg of at least 25° C. The dry image receiving layer has a dry thickness of at least 0.5 μm and up to and including 5 μm. It comprises a water-dispersible release agent and a polymer binder matrix that consists essentially of: (1) a water-dispersible acrylic polymer comprising chemically reacted or chemically non-reacted hydroxyl, phospho, phosphonate, sulfo, sulfonate, carboxy, or carboxylate groups, and (2) a water-dispersible polyester that has a Tg of 30° C. or less. The water-dispersible acrylic polymer is present in an amount of at least 55 weight % and at a dry ratio to the water-dispersible polyester of at least 1:1. The thermal image receiver element can be used to prepare thermal dye images after thermal transfer from a thermal donor element.





Vinyl chloride-based resin latexes, processes for producing the same, and thermal transfer image-receiving sheet obtained using the same

Disclosed are a vinyl chloride-based resin latex which froths little when unreacted monomers remaining in the latex are recovered under heat and reduced-pressure conditions, and a thermal transfer image-receiving sheet which has satisfactory water resistance, does not yellow during storage, and gives images having excellent durability and light resistance. The invention provides a vinyl chloride-based resin latex containing a copolymer of vinyl chloride and an epoxy-group-containing vinyl monomer, or of vinyl chloride, an epoxy-group-containing vinyl monomer, and a carboxylic acid vinyl ester, wherein the content of the epoxy-group-containing vinyl monomer is 0.1% by weight or more but less than 3% by weight, and wherein the latex contains no surfactant and has a solid concentration of 25% by weight or more; a process for producing the latex; and a thermal transfer image-receiving sheet obtained using the latex.





Metallized thermal dye image receiver elements and imaging

A thermal dye image receiver element has a substrate comprising a voided compliant layer and a metallized layer. Disposed on the metallized layer are an opacifying layer, which includes an opacifying agent, and a dye receiving layer. This thermal dye image receiver element can be a duplex element with image receiving layers on both sides of the substrate, and it can be used in association with a thermal donor element to provide a thermal image on either side or on opposing sides of the receiver element. The metallized layer provides increased specular reflectance under the resulting thermal dye images.





Imagewise priming of non-D2T2 printable substrates for direct D2T2 printing

A method for enabling D2T2 printing onto non-D2T2 printable substrates uses a diffusible primer material provided on a dye-sheet or ribbon. The primer comprises a polymer, a release agent, and a plasticizer. The release agent and the plasticizer are diffused into the substrate, while the polymer remains on the dye-sheet or ribbon. Printing of the primer onto the polycarbonate (PC) substrate is controlled via a computer image program corresponding to a colored image; the same program also controls the printing of the colored image at the primed locations. Accordingly, image-wise treatment of a plastic material via the primer selectively renders the PC substrate surface D2T2 printable at the point of personalization, providing for a 100% PC full card body bearing the colored image.





Thermal transfer image-receiving sheet and manufacturing method for thermal transfer image-receiving sheet

Disclosed are a thermal transfer image-receiving sheet which excels in adhesiveness to a receiving layer and in solvent resistance, and a manufacturing method thereof. The thermal transfer image-receiving sheet includes a porous layer, a barrier layer, and a receiving layer stacked in this order on a substrate. The porous layer includes a binder resin and hollow particles, and the barrier layer includes (i) (A) a first acrylic resin and (B) one or more kinds of resins selected from the group consisting of polyester resins, polyvinyl pyrrolidone type resins, polyester type urethane resins, and a second acrylic resin which differs from the first acrylic resin; or (ii) a polyvinyl pyrrolidone type resin.





Method of foil transfer employing foil transferring face forming toner and image forming method

A method of transferring a foil, comprising: forming a foil transferring face on a photoreceptor employing a foil transferring face forming toner; transferring the foil transferring face onto a base substance, followed by fixing the foil transferring face; supplying a transfer foil having at least a foil and an adhesive layer onto the base substance having the fixed foil transferring face; heating the transfer foil and the foil transferring face while the adhesive layer of the transfer foil is in contact with the foil transferring face, to adhere the foil onto the foil transferring face; and removing the transfer foil from the base substance while leaving the foil adhered to the foil transferring face, wherein the foil transferring face forming toner comprises at least a binder resin, and wherein the binder resin comprises a polymer formed using a vinyl monomer comprising at least a carboxyl group.





Foil transferring apparatus and image forming system using the same

In a first thermal transfer portion on the upstream side, a negative toner image forming portion forms on a photosensitive drum a desired negative toner image, which reverses a desired positive toner image selected from all the toner images. The negative toner image forming portion then forms the desired negative toner image on a belt member. The first thermal transfer portion transfers a desired negative foil image from a foil sheet to the belt member so that a desired positive foil image remains on the foil sheet. A second transfer portion transfers the desired positive foil image thus remaining onto the desired positive toner image formed on the sheet of paper. A cleaning portion removes the desired negative toner image and the desired negative foil image from the belt member.





Amine compound, electrophotographic photoconductor, image forming method, image forming apparatus, and process cartridge

To provide an amine compound represented by General Formula (I) below: [In General Formula (I), R1 and R2 represent a substituted or unsubstituted alkyl group, a substituted or unsubstituted aralkyl group, or a substituted or unsubstituted aromatic hydrocarbon group, and may be identical or different; m and n are each an integer of 0 or 1; Ar1 represents a substituted or unsubstituted aromatic hydrocarbon group; Ar2 and Ar3 represent a substituted or unsubstituted alkyl group, a substituted or unsubstituted aralkyl group, or a substituted or unsubstituted aromatic hydrocarbon group; and Ar1 and Ar2, or Ar2 and Ar3, may bind to each other to form a substituted or unsubstituted heterocyclic group including a nitrogen atom.]





Carrier, two-component developer using the same, and image-forming apparatus using said developer

The present invention provides a carrier for a two-component electrophotographic developer, comprising a core particle and a thermoset silicone resin layer coated thereon, wherein said layer comprises a charge control agent and is formed by heat-treatment at a temperature below the melting point of said charge control agent.





Driving device and image forming apparatus

A driving device includes a stretched member, and a first rotation member and a second rotation member that support the stretched member in a stretched manner. The first rotation member has a first rotation axis, and the second rotation member has a second rotation axis. The first rotation member includes a plurality of members arranged in an axial direction of the first rotation axis.





Belt unit, fixing device and image forming apparatus

A belt unit includes an endless belt member, a first roller provided on an inner circumferential surface side of the belt member, and a stretching member provided on the inner circumferential surface side of the belt member. The stretching member is configured to stretch the belt member. A circumferential length of the belt member at a center portion in a widthwise direction of the belt member is shorter than a circumferential length of the belt member at an end portion in the widthwise direction of the belt member.





Three-dimensional image sensor and mobile device including same

A 3D image sensor includes a depth pixel that includes: a photo detector generating photo-charge; first and second floating diffusion regions; a first transfer transistor transferring photo-charge to the first floating diffusion region during a first transfer period in response to a first transfer gate signal; a second transfer transistor transferring photo-charge to the second floating diffusion region during a second transfer period in response to a second transfer gate signal; and an overflow transistor that discharges surplus photo-charge in response to a drive gate signal. A control logic unit controlling operation of the depth pixel includes a first logic element providing the first transfer gate signal, a second logic element providing the second transfer gate signal, and another logic element providing the drive gate signal to the overflow transistor when the first transfer period overlaps, at least in part, the second transfer period.
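The overlap condition that gates the overflow transistor can be expressed as a simple interval intersection. This is an illustrative software model with hypothetical interval endpoints; the patent describes hardware logic, not code.

```python
# Drive gate signal is asserted when the first transfer period overlaps,
# at least in part, the second transfer period.
def periods_overlap(t1_start, t1_end, t2_start, t2_end):
    """True when intervals [t1_start, t1_end] and [t2_start, t2_end] intersect."""
    return t1_start < t2_end and t2_start < t1_end

# Transfer periods share the interval [3, 5]: drive gate asserted.
print(periods_overlap(0, 5, 3, 8))   # True
# Disjoint transfer periods: drive gate not asserted.
print(periods_overlap(0, 2, 3, 8))   # False
```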





Range sensor and range image sensor

The range image sensor is provided on a semiconductor substrate with an imaging region composed of a plurality of two-dimensionally arranged units (pixels P), and obtains a range image on the basis of charge quantities QL and QR output from the units. Each of the units is provided with a charge generating region (the region outside a transfer electrode 5) where charges are generated in response to incident light; at least two semiconductor regions 3 which are arranged spatially apart to collect charges from the charge generating region; and a transfer electrode 5 which is installed at the periphery of, and surrounds, each semiconductor region 3, and to which a charge transfer signal different in phase is given.





Feature value estimation device and corresponding method, and spectral image processing device and corresponding method

An estimation device is configured to estimate a feature value of a specific component contained in a sample and includes: a spectral estimation parameter storage module; a calibration parameter storage module; a multiband image acquirer; an optical spectrum operator configured to compute an optical spectrum from a multiband image using a spectral estimation parameter; and a calibration processor configured to compute the feature value from the optical spectrum using a calibration parameter.
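The two-stage pipeline above can be sketched as two linear maps. The patent does not specify the estimators; the linear model, matrix shapes, and parameter names here are all illustrative assumptions.

```python
import numpy as np

# Stage 1: a spectral estimation parameter W maps a multiband pixel to an
# optical spectrum. Stage 2: a calibration parameter (weights c, offset b)
# maps the spectrum to a feature value of the specific component.
n_bands, n_wavelengths = 4, 8
rng = np.random.default_rng(0)
W = rng.random((n_wavelengths, n_bands))   # spectral estimation parameter
c = rng.random(n_wavelengths)              # calibration weights
b = 0.1                                    # calibration offset

pixel = rng.random(n_bands)                # one multiband measurement
spectrum = W @ pixel                       # estimated optical spectrum
feature = c @ spectrum + b                 # estimated feature value
print(spectrum.shape)                      # (8,)
```

In the device as described, W would come from the spectral estimation parameter storage module and (c, b) from the calibration parameter storage module.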