
Method and apparatus for performing logical compare operations

A method and apparatus for including in a processor instructions for performing logical-comparison and branch support operations on packed or unpacked data. In one embodiment, instruction decode logic decodes instructions for an execution unit to operate on packed data elements including logical comparisons. A register file including 128-bit packed data registers stores packed single-precision floating point (SPFP) and packed integer data elements. The logical comparisons may include comparison of SPFP data elements and comparison of integer data elements and setting at least one bit to indicate the results. Based on these comparisons, branch support actions are taken. Such branch support actions may include setting the at least one bit, which in turn may be utilized by a branching unit in response to a branch instruction. Alternatively, the branch support actions may include branching to an indicated target code location.
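
As a rough illustration of the kind of packed logical compare described above, the Python sketch below ANDs and AND-NOTs two 128-bit packed operands and derives two flag bits that a branching unit could consume. The function name and the ZF/CF semantics echo SSE4.1-style test instructions and are assumptions for illustration, not the patent's exact definition.

    # Hypothetical model of a packed logical compare that sets flag bits;
    # the ZF/CF meanings below are an assumption, not the patented behaviour.
    def packed_logical_compare(src1: int, src2: int, width: int = 128):
        mask = (1 << width) - 1
        zf = ((src1 & src2) & mask) == 0     # set when AND of the operands is all zeros
        cf = ((~src1 & src2) & mask) == 0    # set when ANDN of the operands is all zeros
        return zf, cf

    # The low lanes of the operands overlap, so ZF is clear and CF is set;
    # a conditional branch could then test either bit.
    zf, cf = packed_logical_compare(0x0000000F, 0x00000001)
    print(zf, cf)   # False True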





Method and apparatus for performing logical compare operation

A method and apparatus for including in a processor instructions for performing logical-comparison and branch support operations on packed or unpacked data. In one embodiment, instruction decode logic decodes instructions for an execution unit to operate on packed data elements including logical comparisons. A register file including 128-bit packed data registers stores packed single-precision floating point (SPFP) and packed integer data elements. The logical comparisons may include comparison of SPFP data elements and comparison of integer data elements and setting at least one bit to indicate the results. Based on these comparisons, branch support actions are taken. Such branch support actions may include setting the at least one bit, which in turn may be utilized by a branching unit in response to a branch instruction. Alternatively, the branch support actions may include branching to an indicated target code location.





Workload migration between virtualization softwares

A virtual machine (VM) migration from a source virtual machine monitor (VMM) to a destination VMM on a computer system. Each of the VMMs includes virtualization software, and one or more VMs are executed in each of the VMMs. The virtualization software allocates hardware resources in the form of virtual resources for the concurrent execution of one or more VMs and the virtualization software. A portion of a memory of the hardware resources includes hardware memory segments. A first portion of the memory segments is assigned to a source logical partition and a second portion is assigned to a destination logical partition. The source VMM operates in the source logical partition and the destination VMM operates in the destination logical partition. The first portion of the memory segments is mapped into a source VMM memory, and the second portion of the memory segments is mapped into a destination VMM memory.





Hardware streaming unit

A processor having a streaming unit is disclosed. In one embodiment, a processor includes one or more execution units configured to execute instructions of a processor instruction set. The processor further includes a streaming unit configured to execute a first instruction of the processor instruction set, wherein executing the first instruction comprises the streaming unit loading a first data stream from a memory of a computer system responsive to execution of the first instruction. The first data stream comprises a plurality of data elements. The first instruction includes a first argument indicating a starting address of the first stream, a second argument indicating a stride between the data elements, and a third argument indicative of an ending address of the stream. The streaming unit is configured to output a second data stream corresponding to the first data stream.
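
A minimal software model of the strided load described above might look like the following; the function name and the flat-list memory are illustrative stand-ins for the streaming unit's hardware behaviour, not the patented design.

    # Toy model: gather elements from a flat memory at start, start+stride, ...
    # up to (but not including) the ending address, producing the output stream.
    def stream_load(memory, start, stride, end):
        return [memory[addr] for addr in range(start, end, stride)]

    memory = list(range(100))                      # one element per address
    first_stream = stream_load(memory, start=4, stride=8, end=40)
    print(first_stream)                            # [4, 12, 20, 28, 36]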





Method and system for heterogeneous filtering framework for shared memory data access hazard reports

A system and method for detecting, filtering, prioritizing and reporting shared memory hazards are disclosed. The method includes, for a unit of hardware operating on a block of threads, mapping a plurality of shared memory locations assigned to the unit to a tracking table. The tracking table comprises initialization information for each shared memory location. The method also includes, for an instruction of a program within a barrier region, identifying a potential conflict by identifying a second access to a location in shared memory within a block of threads executed by the hardware unit. First information associated with a first access and second information associated with the second access to the location are determined. Filter criteria are applied to the first and second information to determine whether the instruction causes a reportable hazard. The instruction is reported when it causes the reportable hazard.
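
A simplified sketch of the tracking-table idea follows, assuming a per-location record of the first access and a list of filter predicates; the data layout, field names, and reporting format are assumptions for illustration, not the patented implementation.

    from dataclasses import dataclass

    @dataclass
    class Access:
        thread: int
        is_write: bool
        pc: int                     # instruction address, used only for reporting

    tracking = {}                   # shared-memory address -> first recorded access

    def record_access(addr, access, filters):
        first = tracking.get(addr)
        if first is None:           # first access seen within the barrier region
            tracking[addr] = access
            return
        conflict = (first.thread != access.thread and
                    (first.is_write or access.is_write))
        if conflict and all(f(first, access) for f in filters):
            print(f"hazard at 0x{addr:x}: pc 0x{first.pc:x} vs pc 0x{access.pc:x}")

    # Example filter criterion (placeholder): report every conflicting pair.
    filters = [lambda first, second: True]
    record_access(0x100, Access(thread=0, is_write=True, pc=0x40), filters)
    record_access(0x100, Access(thread=1, is_write=False, pc=0x44), filters)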





Managing utilization of physical processors of a shared processor pool in a virtualized processor environment

Systems, methods and computer program products may provide for managing utilization of one or more physical processors in a shared processor pool. A method of managing utilization of one or more physical processors in a shared processor pool may include determining a current amount of utilization of the one or more physical processors and generating an instruction message. The instruction message may be at least partially determined by the current amount of utilization. The method may further include sending the instruction message to a guest operating system, the guest operating system having a number of enabled virtual processors.





System, method and program product for cost-aware selection of stored virtual machine images for subsequent use

A system, method and computer program product for allocating shared resources. Upon receiving requests for resources, the cost of bundling software in a virtual machine (VM) image is automatically generated. Software is selected by the cost for each bundle according to the time required to install it where required, offset by the time to uninstall it where not required. A number of VM images having the highest software bundle value (i.e., highest cost bundled) are selected and stored, e.g., in a machine image store. With subsequent requests for resources, VMs may be instantiated from one or more stored VM images and, further, stored images may be selectively updated with new images.
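
The bundle-value calculation can be pictured with a small sketch like the one below, where the value of keeping a package in the image is the install time it saves, offset by the uninstall time it costs when a request does not need it. The package names, timings, and candidate bundles are made up for the example.

    def bundle_value(bundle, requests, install_time, uninstall_time):
        value = 0.0
        for needed in requests:                    # each request: set of packages
            for pkg in bundle:
                if pkg in needed:
                    value += install_time[pkg]     # installation avoided
                else:
                    value -= uninstall_time[pkg]   # uninstallation incurred
        return value

    install_time = {"db": 300, "web": 120, "cache": 60}      # seconds
    uninstall_time = {"db": 30, "web": 15, "cache": 10}
    requests = [{"db", "web"}, {"web"}, {"db", "cache"}]

    candidates = [{"db", "web"}, {"web", "cache"}, {"db", "web", "cache"}]
    best = max(candidates,
               key=lambda b: bundle_value(b, requests, install_time, uninstall_time))
    print(sorted(best))            # the bundle worth storing as a VM image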





System and method for performing memory management using hardware transactions

The systems and methods described herein may be used to implement a shared dynamic-sized data structure using hardware transactional memory to simplify and/or improve memory management of the data structure. An application (or thread thereof) may indicate (or register) the intended use of an element of the data structure and may initialize the value of the data structure element. Thereafter, another thread or application may use hardware transactions to access the data structure element while confirming that the data structure element is still part of the dynamic data structure and/or that memory allocated to the data structure element has not been freed. Various indicators may be used to determine whether memory allocated to the element can be freed.





Virtualization and dynamic resource allocation aware storage level reordering

A system and method for reordering storage levels in a virtualized environment includes identifying a virtual machine (VM) to be transitioned and determining a new storage level order for the VM. The new storage level order reduces a VM live state during a transition, and accounts for hierarchical shared storage memory and criteria imposed by an application to reduce recovery operations after dynamic resource allocation actions. The new storage level order recommendation is propagated to VMs. The new storage level order is applied in the VMs. A different storage-level order is recommended after the transition.





Remediating gaps between usage allocation of hardware resource and capacity allocation of hardware resource

A usage allocation of a hardware resource to each of a number of workloads over time is determined using a demand model. The usage allocation of the resource includes a current and past actual usage allocation of the resource, a future projected usage allocation of the resource, and current and past actual usage of the resource. A capacity allocation of the resource is determined using a capacity model. The capacity allocation of the resource includes a current and past capacity and a future projected capacity of the resource. Whether a gap exists between the usage allocation and the capacity allocation is determined using a mapping model. Where the gap exists between the usage allocation of the resource and the capacity allocation of the resource, a user is presented with options determined using the mapping model and selectable by the user to implement a remediation strategy to close the gap.
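
Stripped of the demand, capacity, and mapping models, the gap check itself reduces to a per-period comparison, as in this illustrative sketch; the period labels, units, and remediation options are placeholders rather than anything prescribed by the abstract.

    def find_gaps(usage_by_period, capacity_by_period):
        gaps = {}
        for period, usage in usage_by_period.items():
            capacity = capacity_by_period.get(period, 0.0)
            if usage > capacity:
                gaps[period] = usage - capacity
        return gaps

    usage = {"2024-Q3": 80.0, "2024-Q4": 120.0}        # projected cores used
    capacity = {"2024-Q3": 100.0, "2024-Q4": 100.0}    # projected cores available
    for period, gap in find_gaps(usage, capacity).items():
        print(f"{period}: gap of {gap} cores -> options: add capacity, defer workload")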





Managing access to a shared resource by tracking active requestor job requests

The technology of the present application provides a networked computer system with at least one workstation and at least one shared resource such as a database. Access to the database by the workstation is managed by a database management system. An access engine reviews job requests for access to the database and allows job requests access to the resource based on protocols stored by the system.





Cerium containing nanoparticles prepared in non-polar solvent

A method of making cerium-containing metal oxide nanoparticles in non-polar solvent eliminates the need for solvent shifting steps. The direct synthesis method involves: (a) forming a reaction mixture of a source of cerous ion and a carboxylic acid, optionally a hydrocarbon solvent, and optionally a non-cerous metal ion; (b) heating the reaction mixture to oxidize cerous ion to ceric ion; and (c) recovering a nanoparticle of either cerium oxide or a mixed metal oxide comprising cerium. The cerium-containing oxide nanoparticles thus obtained have cubic fluorite crystal structure and a geometric diameter in the range of about 1 nanometer to about 20 nanometers. Dispersions of cerium-containing oxide nanoparticles prepared by this method can be used as a component of a fuel or lubricant additive.





Implementation of multi-tasking on a digital signal processor with a hardware stack

The disclosure relates to the implementation of multi-tasking on a digital signal processor. Blocking functions are arranged such that they do not make use of a processor's hardware stack. Respective function calls are replaced with a piece of inline assembly code, which instead performs a branch to the correct routine for carrying out said function. If a blocking condition of the blocking function is encountered, a task switch can be done to resume another task. Because the hardware stack is not used when a task switch might have to occur, the contents of the hardware stack cannot become mixed up among function calls performed by different tasks.





Information processing apparatus for restricting access to memory area of first program from second program

A processor determines whether a first program is under execution when a second program is executed, and changes a setting of a memory management unit based on access prohibition information so that a fault occurs when the second program makes an access to a memory when the first program is under execution. Then, the processor determines whether an access from the second program to a memory area used by the first program is permitted based on memory restriction information when the fault occurs while the first program and the second program are under execution, and changes the setting of the memory management unit so that the fault does not occur when the access to the memory area is permitted.





Method and system for managing hardware resources to implement system functions using an adaptive computing architecture

An adaptable integrated circuit is disclosed having a plurality of heterogeneous computational elements coupled to an interconnection network. The interconnection network changes interconnections between the plurality of heterogeneous computational elements in response to configuration information. A first group of computational elements is allocated to form a first version of a functional unit to perform a first function by changing interconnections in the interconnection network between the first group of heterogeneous computational elements. A second group of computational elements is allocated to form a second version of a functional unit to perform the first function by changing interconnections in the interconnection network between the second group of heterogeneous computational elements. One or more of the first or second group of heterogeneous computational elements are reallocated to perform a second function by changing the interconnections between the one or more of the first or second group of heterogeneous computational elements.





Shared load-store unit to monitor network activity and external memory transaction status for thread switching

An array of processing elements (PEs) is interconnected with memory by a data packet-switched network that enables any of the PEs to access the memory. The network connects the PEs and their local memories to a common controller. The common controller may include a shared load/store (SLS) unit and an array control unit. A shared read may be addressed to an external device via the common controller. The SLS unit can continue activity as if a normal shared read operation has taken place, except that the transactions that have been sent externally may take more cycles to complete than the local shared reads. Hence, a number of transaction-enabled flags may not have been deactivated even though there is no more bus activity. The SLS unit can use this state to indicate to the array control unit that a thread switch may now take place.





Hardware assist thread for increasing code parallelism

Mechanisms are provided for offloading a workload from a main thread to an assist thread. The mechanisms receive, in a fetch unit of a processor of the data processing system, a branch-to-assist-thread instruction of a main thread. The branch-to-assist-thread instruction informs hardware of the processor to look for an already spawned idle thread to be used as an assist thread. Hardware implemented pervasive thread control logic determines if one or more already spawned idle threads are available for use as an assist thread. The hardware implemented pervasive thread control logic selects an idle thread from the one or more already spawned idle threads if it is determined that one or more already spawned idle threads are available for use as an assist thread, to thereby provide the assist thread. In addition, the hardware implemented pervasive thread control logic offloads a portion of a workload of the main thread to the assist thread.





Data mover moving data to accelerator for processing and returning result data based on instruction received from a processor utilizing software and hardware interrupts

Efficient data processing apparatus and methods include hardware components which are pre-programmed by software. Each hardware component triggers the other to complete its tasks. After the final pre-programmed hardware task is complete, the hardware component issues a software interrupt.





Generating hardware events via the instruction stream for microprocessor verification

A processor receives an instruction operation (OP) code from a verification system. The instruction OP code includes instruction bits and forced event bits. The processor identifies a forced event based upon the forced event bits, which is unrelated to an instruction that corresponds to the instruction bits. In turn, the processor executes the forced event.





System for selecting software components based on a degree of coherence

Disclosed is a novel system and method to select software components. A set of available software components is accessed. Next, one or more dimensions are defined. Each dimension is an attribute of the set of available software components. A set of coherence distances between each pair of the available software components in the set of available software components is calculated for each of the dimensions that have been defined. The coherence distances calculated for each pair of the available software components are then combined into an overall coherence degree for each of the available software components. Using the overall coherence degree, one or more software components are selected to be included in a software bundle.
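
One way to picture the flow is the sketch below, which assumes numeric attribute values and absolute difference as the per-dimension distance; both are illustrative assumptions, since the abstract does not mandate a particular metric or combination rule.

    def coherence_degree(component, others, dimensions):
        total = 0.0
        for other in others:
            if other is component:
                continue
            # distance summed over dimensions; smaller distance = more coherent
            total += sum(abs(component[d] - other[d]) for d in dimensions)
        return -total        # negate so a larger degree means more coherent

    components = [
        {"name": "logger",  "size": 2, "deps": 1},
        {"name": "metrics", "size": 3, "deps": 2},
        {"name": "legacy",  "size": 9, "deps": 7},
    ]
    dims = ["size", "deps"]
    ranked = sorted(components,
                    key=lambda c: coherence_degree(c, components, dims),
                    reverse=True)
    print([c["name"] for c in ranked[:2]])   # the two most coherent components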





System and method for recommending software artifacts

A method for recommending at least one artifact to an artifact user is described. The method includes obtaining user characteristic information reflecting preferences, particular to the artifact user, as to a desired artifact. The method also includes obtaining first metadata about each of one or more candidate artifacts, and scoring, as one or more scored artifacts, each of the one or more candidate artifacts by evaluating one or more criteria, not particular to the artifact user, applied to the first metadata. The method further includes scaling, as one or more scaled artifacts, a score of each of the one or more scored artifacts, by evaluating the suitability of each of the one or more scored artifacts in view of the user characteristic information. The method lastly includes recommending to the artifact user at least one artifact from among the one or more scaled artifacts based on its scaled score.
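
A compact sketch of the two-stage ranking follows; the metadata fields, weights, and preference handling are made-up stand-ins for the first metadata, the user-independent criteria, and the user characteristic information.

    def score(artifact):                        # criteria not particular to the user
        return 0.7 * artifact["popularity"] + 0.3 * artifact["test_coverage"]

    def scale(raw_score, artifact, prefs):      # suitability for this artifact user
        factor = 1.0
        if artifact["language"] == prefs.get("language"):
            factor += 0.5
        if artifact["license"] not in prefs.get("allowed_licenses", ()):
            factor = 0.0                        # unusable given the user's constraints
        return raw_score * factor

    candidates = [
        {"name": "libfast", "popularity": 0.9, "test_coverage": 0.4,
         "language": "C", "license": "MIT"},
        {"name": "libsafe", "popularity": 0.6, "test_coverage": 0.9,
         "language": "Rust", "license": "GPL"},
    ]
    prefs = {"language": "C", "allowed_licenses": {"MIT", "BSD"}}
    best = max(candidates, key=lambda a: scale(score(a), a, prefs))
    print(best["name"])                         # recommended artifact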





Identifying differences between source codes of different versions of a software when each source code is organized using incorporated files

An aspect of the present invention identifies differences between source codes (e.g. of different versions of a software), when each source code is organized using incorporated files. In one embodiment, in response to receiving identifiers of first and second source codes (each source code being organized as a corresponding set of code files), listings of the instructions in the first and second source codes are constructed. Each listing is constructed, for example, by replacing each incorporate statement in the source code with the instructions stored in a corresponding one of the code files. The differences between the first and second source codes are then found by comparing the constructed listings of instructions.
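
In spirit, the embodiment amounts to expanding the incorporate statements into flat listings and then diffing them. The sketch below uses a C-style #include as the incorporate statement and Python's difflib for the comparison, both chosen purely for illustration.

    import difflib

    def expand(source_lines, code_files):
        """Replace each incorporate statement with the referenced file's lines."""
        listing = []
        for line in source_lines:
            if line.startswith("#include "):
                name = line.split()[1].strip('"<>')
                listing.extend(expand(code_files[name], code_files))
            else:
                listing.append(line)
        return listing

    files_v1 = {"util.h": ["int add(int a, int b);"]}
    files_v2 = {"util.h": ["int add(int a, int b);", "int sub(int a, int b);"]}
    main_src = ['#include "util.h"', "int main() { return 0; }"]

    for change in difflib.unified_diff(expand(main_src, files_v1),
                                       expand(main_src, files_v2), lineterm=""):
        print(change)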





System and method for generating software unit tests simultaneously with API documentation

A system and method may generate unit tests for source code concurrently with API documentation. The system may receive a source code file including several comments sections. Each comments section may include a description of a source code unit such as a class, method, member variable, etc. The description may also correspond to input and output parameters of the source code unit. The system and method may parse the source code file to determine a source code function type corresponding to the unit description and copy the unit description to a unit test stub corresponding to the function type. A developer or another module may then complete the unit test stub to transform each stub into a complete unit test corresponding to the source code unit. Additionally, the system and method may execute the unit test and generate a test result indication for each unit test.
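
A toy version of the stub generation is shown below, assuming single-line comments immediately preceding Python function definitions as the "comments sections"; this is a deliberate simplification of the parsing the abstract implies.

    import re

    SOURCE = '''
    # Returns the sum of a and b.
    def add(a, b):
        return a + b

    # Returns True if n is even.
    def is_even(n):
        return n % 2 == 0
    '''

    # Capture each comment line together with the name of the function it documents.
    pattern = re.compile(r"# (?P<doc>.+)\n\s*def (?P<name>\w+)\(")
    stubs = []
    for m in pattern.finditer(SOURCE):
        stubs.append(
            f"def test_{m.group('name')}():\n"
            f'    """{m.group("doc")}"""\n'
            f"    assert False  # developer or another module completes the stub\n"
        )
    print("\n".join(stubs))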





Program module applicability analyzer for software development and testing for multi-processor environments

In one embodiment, a machine-implemented method programs a heterogeneous multi-processor computer system to run a plurality of program modules, wherein each program module is to be run on one of the processors. The system includes a plurality of processors of two or more different processor types. According to the recited method, machine-implemented offline processing is performed using a plurality of SIET tools of a scheduling information extracting toolkit (SIET) and a plurality of SBT tools of a schedule building toolkit (SBT). A program module applicability analyzer (PMAA) determines whether a first processor of a first processor type is capable of running a first program module without compiling the first program module. Machine-implemented online processing is performed using realtime data to test the scheduling software and the selected schedule solution.





Software modification methods to provide master-slave execution for multi-processing and/or distributed parallel processing

In one embodiment of the invention, a method is disclosed for modifying a pre-existing application program for multi-processing and/or distributed parallel processing. The method includes searching an application program for a computational loop; analyzing the computational loop to determine independence of the computational transactions of the computational loop; and replacing the computational loop with master code and slave code to provide master-slave execution of the computational loop in response to analyzing the computational loop to determine independence of the computational transactions of the computational loop. Multiple instances of the modified application program are executed to provide multi-processing and/or distributed parallel processing.





Method and system for upgrading software

Embodiments of the present disclosure provide a method and a system for upgrading software. The method includes: a client reports a software upgrade request to a server, wherein the upgrade request carries file information of the local software to be upgraded; the server determines the difference from the latest software version according to the file information of the software to be upgraded in the upgrade request, generates upgrade instruction information according to the difference, and sends it to the client; the client downloads and updates the relevant files and performs the relevant local upgrade operations according to the instructions in the received upgrade instruction information. Technical solutions of the present disclosure can save bandwidth resources and reduce the workload for upgrading software.
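
A toy model of the exchange might look like the following, using per-file hashes as the "file information" and a plain dict as the upgrade instruction message; both are assumptions about a concrete encoding the abstract leaves open.

    def build_upgrade_request(local_files):
        """Client side: report a fingerprint of each local file."""
        return {name: hash(content) for name, content in local_files.items()}

    def build_upgrade_instructions(request, latest_files):
        """Server side: list only the files that differ from the latest version."""
        download = [name for name, content in latest_files.items()
                    if request.get(name) != hash(content)]
        delete = [name for name in request if name not in latest_files]
        return {"download": download, "delete": delete}

    local = {"app.bin": "v1 code", "lang.dat": "en"}
    latest = {"app.bin": "v2 code", "lang.dat": "en", "help.txt": "new"}
    print(build_upgrade_instructions(build_upgrade_request(local), latest))
    # {'download': ['app.bin', 'help.txt'], 'delete': []}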





Predictive software streaming

A software streaming platform may be implemented that predictively chooses units of a program to download based on the value of downloading the unit. In one example, a program is divided into blocks. The sequence in which blocks of the program historically have been requested is analyzed in order to determine, for a given history, what block is the next most likely to be requested. Blocks then may be combined into chunks, where each chunk represents a chain of blocks that have a high likelihood of occurring in a sequence. A table is then constructed indicating, for a given chunk, the chunks that are most likely to follow the given chunk. Based on the likelihood table and various other considerations, the value of downloading particular chunks is determined, and the chunk with the highest expected value is downloaded.
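
The core of the expected-value choice can be sketched as follows; the likelihood table, chunk sizes, and the value function (probability times size) are placeholder stand-ins for the richer considerations the abstract mentions.

    likelihood = {                 # current chunk -> {candidate next chunk: probability}
        "A": {"B": 0.7, "C": 0.2, "D": 0.1},
        "B": {"C": 0.6, "D": 0.4},
    }
    chunk_size = {"B": 4, "C": 2, "D": 8}

    def next_download(current, cached):
        """Pick the not-yet-cached chunk with the highest expected value."""
        values = {c: p * chunk_size[c]
                  for c, p in likelihood.get(current, {}).items()
                  if c not in cached}
        return max(values, key=values.get) if values else None

    print(next_download("A", cached={"B"}))   # 'D': 0.1 * 8 beats 'C': 0.2 * 2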





Firmware update method and apparatus of set-top box for digital broadcast system

A firmware update method and apparatus of a set-top box for a digital broadcast system is provided. A firmware update method of a set-top box for a digital broadcast system includes determining whether a newly received Code Version Table (CVT) following a public CVT which has been previously received and stored is the public CVT or a filtering CVT; and updating, when the newly received CVT is the filtering CVT, the firmware of the set-top box with a filtering firmware indicated by the filtering CVT.





Method, apparatus and computer program for determining the location of a user in an area

Apparatus for orientating a user in a space wherein the space comprises a plurality of zones of which only certain zones constitute functional zones wherein each functional zone includes a first type device containing information relating to the position of the zone in the space and wherein the first type device is reactive to the presence of a second type device associated with the user to provide the user with the information to determine the orientation of the user in the space. A method of orientating the user within the space and guiding the user toward one or more features in the space is also disclosed.





Verification module apparatus for debugging software and timing of an embedded processor design that exceeds the capacity of a single FPGA

A plurality of Field Programmable Gate Arrays (FPGA), high performance transceivers, and memory devices provide a verification module for timing and state debugging of electronic circuit designs. Signal value compression circuits and gigabit transceivers embedded in each FPGA increase the fanout of each FPGA. Ethernet communication ports enable remote software debugging of processor instructions.





Synthesis of fast squarer functional blocks

In one embodiment of the invention, an integrated circuit (IC) design tool is provided for synthesizing logic, including one or more software modules to synthesize a gate-level netlist of a squarer functional block. The software modules include a bitvector generator, a bitvector reducer, and a hybrid multibit adder generator. The bitvector generator multiplies bits of a vector together to generate partial products for a plurality of bitvectors and then optimizes a plurality of least significant bitvectors. The bitvector reducer reduces the partial products in the bitvectors of the squarer functional block down to a pair of final vectors. The hybrid multibit adder generator generates a hybrid multibit adder including a first adder and a second adder coupled together by a carry bit with bit widths being responsive to a dividerbit. The hybrid multibit adder adds the pair of final vectors together to generate a final result for the squarer functional block.





Personal care compositions with improved hyposensitivity

The present invention provides personal care compositions comprising a carrier and a mixture of essential oil components having specific levels of eucalyptol, terpene materials and auxiliary fragrance materials. The compositions herein are gentle to skin and have a fragrance and activity similar to those the composition would have if it were made using the pure extracted essential oil.





Topical skin care formulations comprising plant extracts

Disclosed are topical skin compositions and corresponding methods of their use that include an extract from Artabotrys hexapetalus, an extract from Sassafras tzumu, and an extract from Prunus salicina.





Methods and apparatus to generate and use content-aware watermarks

Methods and apparatus to generate and use content-aware watermarks are disclosed herein. In a disclosed example method, media composition data is received and at least one word present in an audio track of the media composition data is selected. The word is then located in a watermark.





System, method and program product for providing automatic speech recognition (ASR) in a shared resource environment

A speech recognition system, method of recognizing speech and a computer program product therefor. A client device identified with a context for an associated user selectively streams audio to a provider computer, e.g., a cloud computer. Speech recognition receives streaming audio, maps utterances to specific textual candidates and determines a likelihood of a correct match for each mapped textual candidate. A context model selectively winnows candidates to resolve recognition ambiguity according to context whenever multiple textual candidates are recognized as potential matches for the same mapped utterance. Matches are used to update the context model, which may be used for multiple users in the same context.





Device, method, and graphical user interface for managing concurrently open software applications

A method includes displaying a first application view. A first input is detected, and an application view selection mode is entered for selecting one of concurrently open applications for display in a corresponding application view. An initial group of open application icons in a first predefined area and at least a portion of the first application view adjacent to the first predefined area are concurrently displayed. The initial group of open application icons corresponds to at least some of the concurrently open applications. A gesture is detected on a respective open application icon in the first predefined area, and a respective application view for a corresponding application is displayed without concurrently displaying an application view for any other application in the concurrently open applications. The open application icons in the first predefined area cease to be displayed, and the application view selection mode is exited.





Substituted phenylcarbamoyl alkylamino arene compounds and N,N'-BIS-arylurea compounds

Substituted phenylcarbamoyl alkylamino arenes; substituted phenylthiocarbamyl alkylamino arenes; substituted phenylcarbamoyl alkylamino heteroarenes; substituted phenylthiocarbamyl alkylamino heteroarenes; N-substituted aryl, N'-substituted aryl urea compounds; N-substituted aryl, N'-substituted heteroaryl urea compounds; N-substituted aryl, N'-substituted aryl thiourea compounds and N-substituted aryl, N'-substituted heteroaryl thiourea compounds are provided and may find use as androgen receptor modulators. The compounds may find particular use in treating prostate cancer, including castration-resistant prostate cancer and/or hormone-sensitive prostate cancer.





Risk aware domain name service

A risk aware domain name service (DNS), which includes modulating a time to live (TTL) value associated with the DNS based at least in part on one or more DNS-related metrics associated with a DNS server providing DNS resolution is disclosed. A zone file that indicates a particular TTL value may be generated based at least in part on the one or more DNS-related metrics and provided to the DNS server.





Software-based aliasing for accessing multiple shared resources on a single remote host

In order to allow a single user registered on a single local host or other machine to access multiple shared resources on a remote host, an aliasing mechanism is employed so that multiple concurrent connections can be established by the user to a single remote host, with each connection using a different identity. Each connection can therefore be used to access a different shared resource on the remote host. In some illustrative examples, a user's identifier such as his or her machine log-in identification may be associated with two or more resource sharing aliases. As a result, two or more resource sharing sessions can be established by the user with a single remote host, with each of the sessions using a different one of the aliases. The resource sharing sessions are usually established in accordance with a resource sharing protocol such as the Server Message Block (SMB) protocol.





Time-locked cigarette case

A time-locked cigarette case has a time-controlled locking mechanism which is manually adjustable by the user, a first latch rod which normally retains the case in a closed condition, and a second latch rod which moves to retain the case in a closed condition if the first latch rod is jolted to an open position, so as to prevent the case from being opened by jolting before the manually set time delay has expired.





Lighter and method for eliminating smoking that includes interactive self-learning software

A smoking cessation lighter is configured for lighting cigarettes for a smoker, and learning software is provided for monitoring smoking behavior of a smoker during a first data collection period and guiding a smoker's smoking cessation by directing the smoker when the smoker is to smoke a cigarette based on data collected during the first data collection period. The learning software monitors user behavior and collects data during use of the lighter by the smoker after the initial data collection period in order to analyze and further guide the smoker based on the smoker's cheating behavior, the smoker's behavior of lighting a cigarette for a friend, and the smoker's behavior of skipping use of the lighter at a time when the smoker has been directed to light a cigarette by the lighter.





Prefetch optimizer measuring execution time of instruction sequence cycling through each selectable hardware prefetch depth and cycling through disabling each software prefetch instruction of an instruction sequence of interest

A prefetch optimizer tool for an information handling system (IHS) may improve effective memory access time by controlling both hardware prefetch operations and software prefetch operations. The prefetch optimizer tool selectively disables prefetch instructions in an instruction sequence of interest within an application. The tool measures execution times of the instruction sequence of interest when different prefetch instructions are disabled. The tool may hold hardware prefetch depth constant while cycling through disabling different prefetch instructions and taking corresponding execution time measurements. Alternatively, for each disabled prefetch instruction in the instruction sequence of interest, the tool may cycle through different hardware prefetch depths and take corresponding execution time measurements at each hardware prefetch depth. The tool selects a combination of hardware prefetch depth and prefetch instruction disablement that may improve the execution time in comparison with a baseline execution time.
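
The search itself is a small nested loop over (disabled prefetch instruction, hardware prefetch depth) pairs. In the sketch below, measure_runtime() is a deterministic stand-in for actually timing the instruction sequence of interest, and the site names, depths, and timings are fabricated for the example.

    import itertools

    PREFETCH_SITES = ["pf_0", "pf_1", "pf_2"]    # software prefetches in the sequence
    HW_DEPTHS = [0, 2, 4, 8]                     # selectable hardware prefetch depths

    def measure_runtime(disabled_site, hw_depth):
        # Stand-in for running the instrumented sequence and reading a cycle
        # counter; the table below is fabricated purely for the example.
        table = {("pf_1", 4): 87.0, ("pf_1", 8): 91.0, (None, 0): 100.0}
        return table.get((disabled_site, hw_depth), 95.0)

    baseline = measure_runtime(None, 0)          # nothing disabled, default depth
    best = min(itertools.product([None] + PREFETCH_SITES, HW_DEPTHS),
               key=lambda combo: measure_runtime(*combo))
    if measure_runtime(*best) < baseline:
        print(f"disable {best[0]}, hardware prefetch depth {best[1]}")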





Dynamically expandable and contractible fault-tolerant storage system with virtual hot spare

A dynamically expandable and contractible fault-tolerant storage system employs a virtual hot spare that is created from unused storage capacity across a plurality of storage devices. This unused storage capacity is available if and when a storage device fails for storage of data recovered from the remaining storage device(s). On an ongoing basis, the storage system may determine the amount of unused storage capacity that would be required for the virtual hot spare (e.g., based on the number of storage devices, the capacities of the various storage devices, the amount of data stored, and the manner in which the data is stored) and generate a signal if additional storage capacity is needed for a virtual hot spare.
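
The capacity check behind that signal can be approximated as follows; a mirrored layout is assumed here only to keep the arithmetic simple, whereas the patent's calculation depends on the number of devices, their capacities, and the manner in which the data is actually stored.

    def spare_shortfall(device_capacities, used_per_device):
        """Extra capacity needed so free space can absorb the worst single failure."""
        free = [c - u for c, u in zip(device_capacities, used_per_device)]
        worst = max(range(len(used_per_device)), key=used_per_device.__getitem__)
        usable_free = sum(free) - free[worst]    # the failed device's own free space is lost
        return max(0, used_per_device[worst] - usable_free)

    capacities = [1000, 1000, 500]               # GB per storage device
    used = [700, 650, 300]
    shortfall = spare_shortfall(capacities, used)
    print(f"need {shortfall} GB more for a virtual hot spare" if shortfall
          else "virtual hot spare fits in existing free space")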





Management of multiple software images with shared memory blocks

A data processing entity that includes a mass memory with a plurality of memory locations for storing memory blocks. Each of a plurality of software images includes a plurality of memory blocks with corresponding image addresses within the software image. The memory blocks of software images stored in boot locations of a current software image are relocated. The boot blocks of the current software image are stored into the corresponding boot locations. The data processing entity is booted from the boot blocks of the current software image in the corresponding boot locations, thereby loading the access function. Each request to access a selected memory block of the current software image is served by the access function, with the access function accessing the selected memory block in the associated memory location provided by the control structure.





Method for preparing chlorohydrins composition and method for preparing epichlorohydrin using chlorohydrins composition prepared thereby

Provided are a method of preparing a chlorohydrin composition and a method of preparing epichlorohydrin by using a chlorohydrin composition prepared by using the method. The method of preparing chlorohydrins in which polyhydroxy aliphatic hydrocarbon is reacted with a chlorination agent in the presence of a catalyst includes performing at least one combination of a series of unit operations comprising a first reaction step, a water removal step, and a second reaction step in this stated order, wherein the method further includes mixing a chlorohydrin concentrate obtained by purifying the reaction mixture discharged from the final reaction step from among the reaction steps and a water-rich layer discharged from the water-removal step and diluting the mixture with water. The method of preparing epichlorohydrin includes contacting the chlorohydrin composition prepared by using the method of preparing a chlorohydrin composition with an alkaline agent.





Near infrared fluorogen and fluorescent activating proteins for in vivo imaging and live-cell biosensing

Tissue slices and whole organisms offer substantial challenges to fluorescence imaging. Autofluorescence and absorption via intrinsic chromophores, such as flavins, melanin, and hemoglobins, confound and degrade output from all fluorescent tags. An “optical window,” farther red than most autofluorescence sources and in a region of low hemoglobin and water absorbance, lies between 650 and 900 nm. This valley of relative optical clarity is an attractive target for fluorescence-based studies within tissues, intact organs, and living organisms. Novel fluorescent tags were developed herein, based upon a genetically targeted fluorogen activating protein and cognate fluorogenic dye that yields emission with a peak at 733 nm exclusively when complexed as a “fluoromodule”. This tool improves substantially over previously described far-red/NIR fluorescent proteins in terms of brightness, wavelength, and flexibility by leveraging the flexibility of synthetic chemistry to produce novel chromophores.





Photo-curable transparent resin composition

Provided is a photo-curable transparent resin in which an oxetane monomer for promotion of photo-curing, control of viscosity, and improvement of physical properties is mixed with a photo-cationically polymerizable cyclo-aliphatic epoxy group-containing oligosiloxane resin prepared by a sol-gel reaction. The photo-cationically polymerizable, photo-curable transparent resin containing the added oxetane monomer provides a cured product having high curing density and retaining excellent mechanical properties, thermal-mechanical properties, and electrical properties.





Thermal image receiver elements prepared using aqueous formulations

A thermal image receiver element has, as its outermost layer, a dry image receiving layer with a Tg of at least 25° C. The dry image receiving layer has a dry thickness of at least 0.5 μm and up to and including 5 μm. It comprises a polymer binder matrix that consists essentially of: (1) a water-dispersible acrylic polymer comprising chemically reacted or chemically non-reacted hydroxyl, phospho, phosphonate, sulfo, sulfonate, carboxy, or carboxylate groups, and (2) a water-dispersible polyester that has a Tg of 30° C. or less. The water-dispersible acrylic polymer is present in an amount of at least 55 weight % of the total dry image receiving layer weight and at a dry ratio to the water-dispersible polyester of at least 1:1 to and including 20:1. The thermal image receiver element can be used to prepare thermal dye images after thermal transfer from a thermal donor element.





Leveraging transactional memory hardware to accelerate virtualization and emulation

Various technologies and techniques are disclosed for using transactional memory hardware to accelerate virtualization or emulation. State isolation can be facilitated by providing isolated private state on transactional memory hardware and storing the stack of a host that is performing an emulation in the isolated private state. Memory accesses performed by a central processing unit can be monitored by software to detect that a guest being emulated has made a self modification to its own code sequence. Transactional memory hardware can be used to facilitate dispatch table updates in multithreaded environments by taking advantage of the atomic commit feature. An emulator is provided that uses a dispatch table stored in main memory to convert a guest program counter into a host program counter. The dispatch table is accessed to see if the dispatch table contains a particular host program counter for a particular guest program counter.





Method for producing transparent conductive film, transparent conductive film, transparent conductive substrate and device comprising the same

Provided is a method for producing a transparent conductive film which is formed via a coating step, a drying step and a baking step, wherein the baking step is characterized in that the dried coating film containing an organic metal compound as the main component is baked by being heated to a baking temperature or higher, at which at least the inorganic component is crystallized, under an oxygen-containing atmosphere having a dewpoint of −10° C. or lower, whereby an organic component contained in the dried coating film is removed therefrom by heat decomposition, combustion, or a combination thereof to thereby form a conductive oxide microparticle layer densely filled with conductive oxide microparticles containing a metal oxide as a main component.