
Aqueous epoxy and organo-substituted branched organopolysiloxane emulsions

Aqueous emulsions of epoxy- and organo-substituted branched organopolysiloxanes are prepared by emulsifying the siloxanes in water with the aid of a dispersing agent. The emulsions are storage stable and are useful in multi-component coating, adhesive, and binder systems.





Method of testing integrity of microporous membrane

The present invention provides a method of testing the integrity of a microporous membrane using a colloid solution containing metal particles or metal compound particles, which can accurately determine the integrity of a virus removal membrane formed of a hydrophilized synthetic polymer that has been subjected to protein solution filtration, and a method of producing the colloid solution. The colloid solution comprises a solvent and metal particles dispersed in the solvent, and the solvent comprises components (A) and (B), (A) and (C), or (A), (B), and (C), wherein the component (A) is an anionic polymer having a sulfonic acid group, the component (B) is at least one nonionic surfactant selected from the group consisting of a nonionic surfactant having a polycyclic structure in a hydrophobic moiety and a polyoxyethylene sorbitan fatty acid ester, and the component (C) is a water-soluble polymer having a pyrrolidone group.





Metal nanoparticle dispersion usable for ejection in the form of fine droplets to be applied in the layered shape

According to the present invention, a metal nanoparticle dispersion suitable for multilayer coating by jetting in the form of fine droplets is prepared by dispersing metal nanoparticles having an average particle size of 1 to 100 nm in a dispersion solvent having a boiling point of 80° C. or higher, such that the volume percentage of the dispersion solvent is in the range of 55 to 80% by volume and the fluid viscosity (20° C.) of the dispersion is in the range of 2 mPa·s to 30 mPa·s. When the dispersion is discharged in the form of fine droplets by an inkjet method or the like, it is concentrated by evaporation of the dispersion solvent in the droplets in the course of flight, becoming a viscous dispersion applicable to multilayer coating.





Antibacterial sol-gel coating solution

Antibacterial sol-gel coating solutions are used to form coatings on articles. The antibacterial sol-gel coating solution includes at least one Ti- or Si-containing compound that is capable of hydrolyzing to form a base film; a regulating agent capable of regulating the hydrolysis rate of the Ti- or Si-containing compound; an organic solvent; water; and at least one soluble compound of an antibacterial metal, such as Ag, Cu, Mg, Zn, Sn, Fe, Co, Ni, or Ce.





Method of synthesizing bulk transition metal carbide, nitride and phosphide catalysts

A method for synthesizing catalyst beads of bulk transition metal carbides, nitrides, and phosphides is provided. The method includes providing an aqueous suspension of transition metal oxide particles in a gel-forming base, dropping the suspension into an aqueous solution to form a gel bead matrix, heating the bead to remove the binder, and carburizing, nitriding, or phosphiding the bead to form a transition metal carbide, nitride, or phosphide catalyst bead. The method can be tuned to control the porosity, mechanical strength, and dopant content of the beads. The produced catalyst beads are catalytically active, mechanically robust, and suitable for packed-bed reactor applications, including biomass conversion, petrochemistry, petroleum refining, and electrocatalysis.





Foams of graphene, method of making and materials made thereof

Method for making a liquid foam from graphene. The method includes preparing an aqueous dispersion of graphene oxide and adding a water-miscible compound to the aqueous dispersion to produce a mixture including a modified form of graphene oxide. A second immiscible fluid (a gas or a liquid), with or without a surfactant, is added to the mixture and agitated to form a fluid/water composite, wherein the modified form of graphene oxide aggregates at the interfaces between the fluid and water to form either a closed- or open-cell foam. The modified form of graphene oxide is the foaming agent.





Aqueous delivery system for low surface energy structures

An aqueous delivery system is described including at least one surfactant and at least one water insoluble wetting agent. Further described are low surface energy substrates, such as microporous polytetrafluoroethylene, coated with such an aqueous solution so as to impart a change in at least one surface characteristic compared to the surface characteristics of the uncoated low surface energy substrate.





Defoaming agent

The present invention is a defoaming agent comprising a fatty acid amide (A), a base oil (B) that is liquid at 25° C., an oil thickening agent (C), and a surfactant (D), wherein the content of the fatty acid amide (A) is 1 to 10% by weight, the content of the base oil (B) that is liquid at 25° C. is 71 to 97.9% by weight, the content of the oil thickening agent (C) is 0.1 to 10% by weight, and the content of the surfactant (D) is 1 to 9% by weight based on the weight of the fatty acid amide (A), the base oil (B) that is liquid at 25° C., the oil thickening agent (C), and the surfactant (D), and the viscosity (25° C.) at a shear rate of 1000 s⁻¹ is 0.1 to 1.0 Pa·s.





Defoamer for fermentation

Provided is a defoamer for fermentation which has excellent dispersibility in water, forms neither a precipitate nor oil droplets when the dispersion is heated, and is highly effective in defoaming fermentation media. This defoamer contains a reaction product obtained by mixing a fat or oil having an iodine value of 40 to 130 with glycerin or the like in a molar ratio of from 3/2 to 1/2 to obtain a mixture, adding 4 to 17 mol of propylene oxide to 1 mol of the mixture, and then block-wise adding 20 to 40 mol of ethylene oxide and 70 to 110 mol of propylene oxide thereto in this order, the reaction product having an ethylene oxide/propylene oxide molar ratio of from 1/4 to 2/5.





Oil-in-water silicone emulsion composition

Provided is an oil-in-water silicone emulsion composition that has a low silicone oligomer content and that can form, even without the use of an organotin compound as a curing catalyst, a cured film that exhibits satisfactory strength and satisfactory adherence to a substrate upon removal of the water fraction. The oil-in-water silicone emulsion composition comprises (A) 100 mass parts of a polyorganosiloxane that contains in each molecule at least two groups selected from the group consisting of a silicon-bonded hydroxyl group, alkoxy group, and alkoxyalkoxy group, (B) 0.1 to 200 mass parts of a colloidal silica, (C) 0.1 to 100 mass parts of an aminoxy group-containing organosilicon compound that has in each molecule an average of two silicon-bonded aminoxy groups, (D) 1 to 100 mass parts of an ionic emulsifying agent, (E) 0.1 to 50 mass parts of a non-ionic emulsifying agent, and (F) 10 to 500 mass parts of water.





Data processing apparatus and method for controlling data processing apparatus

A data processing apparatus includes multiple processing means connected in a ring via corresponding communication means. Each communication means includes a reception means for receiving data from the previous communication means and a transmission means for transmitting data to the next communication means. Connection information is assigned to each of the reception means and the transmission means. When a communication means receives a packet that has the same connection information as that assigned to its reception means, it causes the corresponding processing means to perform data processing on the packet, sets the connection information assigned to its transmission means in the packet, and transmits the packet to the next communication means; when it receives a packet whose connection information is not the same as that assigned to its reception means, it transmits the packet to the next communication means without changing the connection information of the packet.
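
A minimal Python sketch of this routing rule (names such as Node, rx_id, and tx_id are illustrative assumptions, not from the disclosure): a packet is processed only by the node whose reception ID matches the packet's connection information, which then relabels the packet with its transmission ID; every other node forwards the packet unchanged.

```python
# Sketch of ring routing keyed on connection information (assumed names).

class Node:
    def __init__(self, rx_id, tx_id, process):
        self.rx_id = rx_id      # connection info assigned to the reception means
        self.tx_id = tx_id      # connection info assigned to the transmission means
        self.process = process  # this node's data processing function

    def handle(self, packet):
        conn, data = packet
        if conn == self.rx_id:          # packet addressed to this node
            data = self.process(data)   # perform data processing
            conn = self.tx_id           # relabel with our transmission ID
        return (conn, data)             # forward to the next node either way

# Three nodes in a ring: each node's tx_id is the next node's rx_id.
ring = [Node(0, 1, lambda d: d + 1),
        Node(1, 2, lambda d: d * 2),
        Node(2, 0, lambda d: d - 3)]

packet = (0, 10)           # enters the ring addressed to node 0
for node in ring:          # one trip around the ring
    packet = node.handle(packet)
print(packet)              # (0, 19): ((10 + 1) * 2) - 3, each node in turn
```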





Interleaving data accesses issued in response to vector access instructions

A vector data access unit includes data access ordering circuitry for issuing data access requests indicated by elements of an earlier and a later vector instruction, one being a write instruction. An element indicating the next data access for each of the instructions is determined. The next data accesses for the earlier and the later instructions may be reordered. The next data access of the earlier instruction is selected if the position of the earlier instruction's next data element is less than or equal to the position of the later instruction's next data element minus a predetermined value. The next data access of the later instruction may be selected if the position of the earlier instruction's next data element is higher than the position of the later instruction's next data element minus the predetermined value. Thus, data accesses from earlier and later instructions are partially interleaved.
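
A minimal sketch of the stated selection rule, with assumed names; k stands in for the predetermined value and positions count vector elements:

```python
# Sketch of the interleaving rule between an earlier and a later vector
# instruction (assumed names; the hardware's tie-breaking may differ).

def select_next(pos_earlier, pos_later, k):
    """Return which instruction issues its next element access."""
    if pos_earlier <= pos_later - k:
        return "earlier"
    return "later"

# Interleave element accesses of two 8-element vector instructions, k = 2.
pe, pl, k, order = 0, 0, 2, []
while pe < 8 or pl < 8:
    if pl >= 8 or (pe < 8 and select_next(pe, pl, k) == "earlier"):
        order.append(("earlier", pe)); pe += 1
    else:
        order.append(("later", pl)); pl += 1
print(order)   # later runs ahead by at most k elements; accesses interleave
```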





Indirect designation of physical configuration number as logical configuration number based on correlation information, within parallel computing

A computing section is provided with a plurality of computing units; it correlatively stores entries of configuration information that describe configurations of the plurality of computing units with physical configuration numbers that represent those entries, and executes a computation in a configuration corresponding to a designated physical configuration number. A status management section designates, for the computing section, a physical configuration number corresponding to the status to which the computing section needs to advance next, and outputs that status as a logical status number that uniquely identifies it in an object code. A determination section determines, based on the logical status number output from the status management section, whether or not the computing section has stored an entry of configuration information corresponding to that status. When the determination section determines that no such entry has been stored, a rewriting section correlatively stores the entry of configuration information and a physical configuration number corresponding to it in the computing section.
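
One way to picture the scheme is a small cache keyed by logical numbers. This is a hedged sketch: ConfigManager, the slot table, and string-valued logical numbers are assumptions, and eviction of stale entries is omitted.

```python
# Sketch of indirect designation: logical numbers resolve to physical
# configuration numbers via correlation information, loading on a miss.

class ComputingSection:
    def __init__(self, capacity):
        self.slots = {}           # physical number -> configuration information
        self.capacity = capacity

class ConfigManager:
    def __init__(self, section, all_configs):
        self.section = section
        self.all_configs = all_configs   # object-code table: logical -> config
        self.logical_to_physical = {}    # correlation information
        self.next_free = 0

    def resolve(self, logical):
        # Determination section: is the configuration already stored?
        if logical not in self.logical_to_physical:
            # Rewriting section: store config info under a new physical number.
            phys = self.next_free % self.section.capacity
            self.next_free += 1
            self.section.slots[phys] = self.all_configs[logical]
            self.logical_to_physical[logical] = phys
        return self.logical_to_physical[logical]

section = ComputingSection(capacity=4)
mgr = ConfigManager(section, {"stateA": "cfg-A", "stateB": "cfg-B"})
print(mgr.resolve("stateA"), mgr.resolve("stateB"), mgr.resolve("stateA"))  # 0 1 0
```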





Data processing device

A state management section of a control section is provided with a corresponding real number storage section that stores a real number converted from a logical number by a configuration number converting section. When the corresponding real number storage section holds configuration information with the real number of the next transition state, the state management section directly supplies the real number to the configuration information storage section in the next or a later processing cycle.





Methods and apparatus for storing expanded width instructions in a VLIW memory for deferred execution

Techniques are described for decoupling the fetching of an instruction stored in a main program memory from the earliest execution of the instruction. An indirect execution method and program instructions to support such execution are addressed. In addition, an improved indirect deferred execution processor (DXP) VLIW architecture is described which supports a scalable array of memory-centric processor elements that do not require local load and store units.





Low latency variable transfer network communicating variable written to source processing core variable register allocated to destination thread to destination processing core variable register allocated to source thread

A method and circuit arrangement utilize a low latency variable transfer network between the register files of multiple processing cores in a multi-core processor chip to support fine-grained parallelism of virtual threads across multiple hardware threads. The communication of a variable over the variable transfer network may be initiated by a move from a local register in a register file of a source processing core to a variable register that is allocated to a destination hardware thread in a destination processing core, so that the destination hardware thread can then move the variable from the variable register to a local register in the destination processing core.





System for accessing a register file using an address retrieved from the register file

A data processing system and method are disclosed. The system comprises an instruction-fetch stage, in which an instruction is fetched and a specific instruction is input into the decode stage; a decode stage, in which said specific instruction indicates that the contents of a register in a register file are to be used as an index, and the register file is then accessed based on said index; and an execution stage, in which the access result of the decode stage is received and computations are implemented according to that result.
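
A minimal sketch of the register-indirect access; the two-step read is the point, while the 8-entry register file and the register numbers are assumptions:

```python
# Sketch of register-indirect register file access: one register's contents
# name the register that is actually read.

regfile = [0] * 8
regfile[3] = 5          # r3 holds the index 5
regfile[5] = 42         # r5 holds the operand we really want

def read_indirect(rf, reg):
    index = rf[reg]     # decode stage: read r3, treat its contents as an index
    return rf[index]    # access the register file again at that index

value = read_indirect(regfile, 3)   # the execute stage receives 42
print(value)
```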





Implementation of multi-tasking on a digital signal processor with a hardware stack

The disclosure relates to the implementation of multi-tasking on a digital signal processor. Blocking functions are arranged such that they do not make use of the processor's hardware stack. The respective function calls are replaced with a piece of inline assembly code, which instead performs a branch to the correct routine for carrying out the function. If a blocking condition of the blocking function is encountered, a task switch can be performed to resume another task. Because the hardware stack is not used whenever a task switch might occur, the contents of the hardware stack cannot become mixed up among function calls performed by different tasks.





System and method for controlling restarting of instruction fetching using speculative address computations

A system and method for controlling restarting of instruction fetching using speculative address computations in a processor are provided. The system includes a predicted target queue to hold branch prediction logic (BPL) generated target address values. The system also includes target selection logic including a recycle queue. The target selection logic selects a saved branch target value from between a previously speculatively calculated branch target value held in the recycle queue and an address value from the predicted target queue. The system further includes a compare block to identify a wrong target in response to a mismatch between the saved branch target value and a currently calculated branch target, where instruction fetching is restarted in response to the wrong target.





Combined branch target and predicate prediction for instruction blocks

Embodiments provide methods, apparatus, systems, and computer readable media associated with predicting predicates and branch targets during execution of programs using combined branch target and predicate predictions. The predictions may be made using one or more prediction control flow graphs which represent predicates in instruction blocks and branches between blocks in a program. The prediction control flow graphs may be structured as trees such that each node in the graphs is associated with a predicate instruction, and each leaf associated with a branch target which jumps to another block. During execution of a block, a prediction generator may take a control point history and generate a prediction. Following the path suggested by the prediction through the tree, both predicate values and branch targets may be predicted. Other embodiments may be described and claimed.
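
A minimal sketch of such a tree walk, assuming a binary tree whose internal nodes are predicates and whose leaves are branch targets, steered by control-point history bits (all names are illustrative):

```python
# Sketch of combined predicate + branch target prediction over a tree-shaped
# prediction control flow graph.

class Pred:   # internal node: a predicate instruction
    def __init__(self, name, if_true, if_false):
        self.name, self.if_true, self.if_false = name, if_true, if_false

class Leaf:   # leaf: a branch target that jumps to another block
    def __init__(self, target):
        self.target = target

def predict(node, history_bits):
    """Walk the tree following history bits; return predicted predicates + target."""
    predicates = []
    for bit in history_bits:
        if isinstance(node, Leaf):
            break
        predicates.append((node.name, bool(bit)))
        node = node.if_true if bit else node.if_false
    return predicates, node.target if isinstance(node, Leaf) else None

tree = Pred("p0",
            Pred("p1", Leaf("blockB"), Leaf("blockC")),
            Leaf("blockD"))
print(predict(tree, [1, 0]))   # ([('p0', True), ('p1', False)], 'blockC')
```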





Operand and limits optimization for binary translation system

Methods and systems for optimizing generation of natively executable code from non-native binary code are disclosed. One method includes receiving a source file including binary code configured for execution according to a non-native instruction set architecture. The method also includes translating one or more code blocks included in the executable binary code to source code, and applying an optimizing algorithm to instructions in the one or more code blocks. The optimizing algorithm is selected to reduce the number of memory address translations performed when translating the source code to native executable binary code, resulting in one or more optimized code blocks. The method further includes compiling the source code to generate an output file comprising natively executable binary code including the one or more optimized code blocks.





APC model extension using existing APC models

A method of extending advanced process control (APC) models includes constructing an APC model table including APC model parameters of a plurality of products and a plurality of work stations. The APC model table includes empty cells and cells filled with existing APC model parameters. Average APC model parameters of the existing APC model parameters are calculated, and filled into the empty cells as initial values. An iterative calculation is performed to update the empty cells with updated values.
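
A hedged sketch of the table extension under one plausible reading of the iterative calculation: empty cells start at the global average of the existing parameters and then move toward the mean of their row and column averages until the values settle. The patented update rule may differ.

```python
import numpy as np

# Sketch: rows are products, columns are work stations; NaN marks empty cells.
nan = np.nan
table = np.array([[1.0, nan, 3.0],
                  [nan, 2.0, nan],
                  [2.0, 1.0, 4.0]])
known = ~np.isnan(table)                  # cells with existing APC parameters

table[~known] = np.nanmean(table)         # initial values: global average
for _ in range(100):                      # iterative calculation
    prev = table.copy()
    estimate = (table.mean(axis=1, keepdims=True)
                + table.mean(axis=0, keepdims=True)) / 2
    table[~known] = estimate[~known]      # update only the formerly empty cells
    if np.abs(table - prev).max() < 1e-9: # converged
        break
print(np.round(table, 3))
```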





Executing machine instructions comprising input/output pairs of execution nodes

A computing machine is disclosed having a memory system for storing a collection of execution nodes, and a head for reading a sequence of symbols in the execution nodes in the memory system and for writing a sequence of symbols in the memory system. The machine is configured to execute a computation with a collection of pairs of execution nodes. Each pair of execution nodes represents a machine instruction. One execution node in the pair represents input of the machine instruction represented by the execution nodes. Another execution node in the pair represents output of the machine instruction represented by the execution nodes. Each execution node has a state of the machine, a sequence of symbols, and a number.





Detecting and reissuing of loop instructions in reorder structure

A processor for processing loop instructions can include an instruction reorder structure and a loop processing controller. The instruction reorder structure is configured to store decoded instructions according to program order and issue the decoded instructions for execution out of program order. The loop processing controller is configured to detect a loop in the decoded instructions stored in the instruction reorder structure and cause the instruction reorder structure to reissue the decoded instructions that form the loop for re-execution.





Method for activating processor cores within a computer system

A technique for activating processor cores within a computer system is disclosed. Initially, a value representing a number of processor cores to be enabled within the computer system is received. The computer system includes multiple processors, and each of the processors includes multiple processor cores. Next, a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks.





Client-allocatable bandwidth pools

Methods and apparatus for client-allocatable bandwidth pools are disclosed. A system includes a plurality of resources of a provider network and a resource manager. In response to a determination to accept a bandwidth pool creation request from a client for a resource group, where the resource group comprises a plurality of resources allocated to the client, the resource manager stores an indication of a total network traffic rate limit of the resource group. In response to a bandwidth allocation request from the client to allocate a specified portion of the total network traffic rate limit to a particular resource of the resource group, the resource manager initiates one or more configuration changes to allow network transmissions within one or more network links of the provider network accessible from the particular resource at a rate up to the specified portion.
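
A minimal sketch (assumed names and units) of the two requests: pool creation records the group's total rate limit, and later allocations are accepted only while the sum of per-resource portions stays within that total.

```python
# Sketch of client-allocatable bandwidth pools (assumed names; real systems
# would also push configuration changes to network links).

class ResourceManager:
    def __init__(self):
        self.pool_limit = {}   # group -> total network traffic rate limit
        self.allocated = {}    # group -> {resource: allocated rate}

    def create_pool(self, group, total_mbps):
        self.pool_limit[group] = total_mbps
        self.allocated[group] = {}

    def allocate(self, group, resource, mbps):
        used = sum(self.allocated[group].values())
        if used + mbps > self.pool_limit[group]:
            return False                        # would exceed the group's total
        self.allocated[group][resource] = mbps  # configuration changes go here
        return True

rm = ResourceManager()
rm.create_pool("groupA", total_mbps=1000)
print(rm.allocate("groupA", "vm1", 600))   # True
print(rm.allocate("groupA", "vm2", 500))   # False: 600 + 500 > 1000
print(rm.allocate("groupA", "vm2", 400))   # True
```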





Method and device for passing parameters between processors

The disclosure provides a method for passing a parameter between processors. The method comprises the following steps: in a source program of a slave processor, directly introducing a static configuration parameter to be passed; obtaining a relative address of the static configuration parameter when converting the source program of the slave processor into a target program of the slave processor; and directly configuring, by a master processor, a parameter value of the static configuration parameter in the target program of the slave processor according to the obtained relative address. The disclosure also provides a system for passing a parameter between processors. The system has no need for external hardware such as a dual-port Random Access Memory (RAM) or a register; thus, the demands that parameter transmission places on external hardware are reduced, and the area and static power consumption of a chip are reduced as well. The disclosure also reduces the cycle delay of the slave processor in accessing the dual-port RAM and the register, thereby effectively reducing the dynamic power consumption of the chip, improving the processing capability of the slave processor, and enhancing the effective performance of the slave processor.





Information processing apparatus for restricting access to memory area of first program from second program

A processor determines whether a first program is under execution when a second program is executed, and changes a setting of a memory management unit based on access prohibition information so that a fault occurs when the second program accesses memory while the first program is under execution. When the fault occurs while both programs are under execution, the processor determines, based on memory restriction information, whether an access from the second program to a memory area used by the first program is permitted, and changes the setting of the memory management unit so that the fault no longer occurs when the access to the memory area is permitted.
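
A minimal sketch of the fault-mediated check, with assumed names: while the first program runs, the second program's target addresses are set to fault, and the fault handler consults restriction information and re-maps only permitted areas.

```python
# Sketch of MMU-fault-mediated access restriction (assumed names).

class MMU:
    def __init__(self):
        self.fault_on = set()           # addresses that trap on access

    def access(self, prog, addr, handler):
        if addr in self.fault_on:
            return handler(prog, addr)  # fault: let software decide
        return f"{prog} read {addr:#x}"

# Memory restriction information: which accesses by prog2 are permitted.
restriction = {("prog2", 0x1000): True, ("prog2", 0x2000): False}

mmu = MMU()
mmu.fault_on = {0x1000, 0x2000}         # prog1 is running: trap prog2's accesses

def on_fault(prog, addr):
    if restriction.get((prog, addr), False):   # access permitted?
        mmu.fault_on.discard(addr)             # stop faulting on this area
        return mmu.access(prog, addr, on_fault)
    return f"{prog} denied {addr:#x}"

print(mmu.access("prog2", 0x1000, on_fault))   # permitted after re-mapping
print(mmu.access("prog2", 0x2000, on_fault))   # denied
```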





Active memory command engine and method

A command engine for an active memory receives high-level tasks from a host and generates corresponding sets of either DCU commands to a DRAM control unit or ACU commands to a processing array control unit. The DCU commands include memory addresses, which are also generated by the command engine, and the ACU commands include instruction memory addresses corresponding to an address in an array control unit where processing array instructions are stored.





Utilization of a microcode interpreter built in to a processor

Augmented processor hardware contains a microcode interpreter. When encrypted microcode is included in a message from a service, the microcode may be passed to the microcode interpreter. Based on decryption and execution of the microcode taking place at the processor hardware, extended functionality may be realized.





Instruction execution

A method of executing an instruction set including a first instruction and a second instruction includes: reading the first instruction; determining whether the first instruction is an instruction which is integral with the second instruction; reading the second instruction; if the first instruction is integral with the second instruction, interpreting the operand field of the second instruction to indicate at least one value to be used in conjunction with at least one bit of the first instruction; and if the first instruction is not integral with the second instruction, interpreting the operand field of the second instruction to indicate an entry of a look-up table.
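
A minimal sketch of the two decode modes under an assumed 8-bit encoding; the flag bit, field widths, and table contents are illustrative only:

```python
# Sketch of the two interpretations of the second instruction's operand field.

LOOKUP = [0x00, 0x10, 0x20, 0x30]        # assumed look-up table

def decode(first, second):
    integral = bool(first & 0x80)        # assumed: top bit marks "integral"
    operand = second & 0x0F              # assumed: low nibble is the operand field
    if integral:
        # Operand is a value combined with bits of the first instruction.
        return (first & 0x0F) << 4 | operand
    # Otherwise the operand field names a look-up table entry.
    return LOOKUP[operand & 0x03]

print(hex(decode(0x83, 0x05)))   # integral: builds 0x35 from both instructions
print(hex(decode(0x03, 0x02)))   # not integral: LOOKUP[2] == 0x20
```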





Issue policy control within a multi-threaded in-order superscalar processor

A multi-threaded in-order superscalar processor 2 includes an issue stage 12 including issue circuitry 22, 24 for selecting instructions to be issued to execution units 14, 16 in dependence upon a currently selected issue policy. A plurality of different issue policies are provided by associated different policy circuitry 28, 30, 32 and a selection between which of these instances of the policy circuitry 28, 30, 32 is active is made by policy selecting circuitry 34 in dependence upon detected dynamic behavior of the processor 2.





Efficient conditional ALU instruction in read-port limited register file microprocessor

A microprocessor performs an architectural instruction that instructs it to perform an operation on first and second source operands to generate a result and to write the result to a destination register only if its architectural condition flags satisfy a condition specified in the architectural instruction. A hardware instruction translator translates the instruction into first and second microinstructions. To execute the first microinstruction, an execution pipeline performs the operation on the source operands to generate the result. To execute the second microinstruction, the execution pipeline writes the destination register with the result generated by the first microinstruction if the architectural condition flags satisfy the condition, and writes the destination register with the current value of the destination register if the architectural condition flags do not satisfy the condition.
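
A minimal sketch of the two-microinstruction split (names are assumed): the first micro-op computes the result unconditionally, and the second always writes the destination, choosing between the new result and the destination's current value.

```python
# Sketch of splitting a conditional ALU instruction into two micro-ops so
# each reads fewer register ports (assumed names).

def uop1_alu(op, src1, src2):
    return op(src1, src2)                # the operation's result, unconditionally

def uop2_cond_write(flags_satisfy, result, old_dest):
    # Always writes the destination: the new result if the condition holds,
    # otherwise the destination's current value (a no-op in effect).
    return result if flags_satisfy else old_dest

regs = {"r1": 7, "r2": 5, "r3": 99}
tmp = uop1_alu(lambda a, b: a + b, regs["r1"], regs["r2"])
regs["r3"] = uop2_cond_write(flags_satisfy=False, result=tmp, old_dest=regs["r3"])
print(regs["r3"])   # 99: condition not met, so the old value is rewritten
```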





Recovering from an error in a fault tolerant computer system

A leading thread and a trailing thread are executed in parallel, preferably assigned to two different cores. Assuming that no transient fault occurs in a given section, the system executes the section speculatively: the leading thread and the trailing thread run simultaneously, buffering their writes in a thread-local area without performing write operations on the shared memory. When the execution results of the two threads match each other, the content buffered in the thread-local area is committed and written to the shared memory. When the results do not match, the leading thread and the trailing thread are rolled back to the preceding commit point and re-executed.
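
A minimal sketch of the commit/rollback flow with assumed names: both threads run the section against thread-local buffers, and only matching results are committed to shared memory.

```python
# Sketch of redundant execution with buffered writes and commit-on-match.

import copy

shared = {"x": 1}

def run_section(snapshot, glitch=0):
    local = copy.deepcopy(snapshot)       # buffer writes thread-locally
    local["x"] = local["x"] * 2 + glitch  # the section's work
    return local

def execute_section():
    while True:
        lead = run_section(shared)        # leading thread
        trail = run_section(shared)       # trailing thread; a transient fault
                                          # would make its result differ
        if lead == trail:                 # results match: commit
            shared.update(lead)
            return
        # Mismatch: roll back to the last commit point and re-execute.

execute_section()
print(shared)   # {'x': 2}
```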





Virtualization support for branch prediction logic enable / disable at hypervisor and guest operating system levels

A hypervisor and one or more guest operating systems resident in a data processing system and hosted by the hypervisor are configured to selectively enable or disable branch prediction logic through separate hypervisor-mode and guest-mode instructions. By doing so, different branch prediction strategies may be employed for different operating systems and user applications hosted thereby to provide finer grained optimization of the branch prediction logic for different operating scenarios.





Efficient parallel computation of dependency problems

A computing method includes accepting a definition of a computing task, which includes multiple Processing Elements (PEs) having execution dependencies. The computing task is compiled for concurrent execution on a multiprocessor device, by arranging the PEs in a series of two or more invocations of the multiprocessor device, including assigning the PEs to the invocations depending on the execution dependencies. The multiprocessor device is invoked to run software code that executes the series of the invocations, so as to produce a result of the computing task.
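
A minimal sketch of one way to assign PEs to invocations by dependency depth (the actual compilation may weigh more than depth): each PE lands in the first invocation after everything it depends on, so the PEs within an invocation can run concurrently.

```python
# Sketch of arranging PEs into a series of invocations by dependency depth.

deps = {   # PE -> PEs it depends on
    "a": [], "b": [], "c": ["a"], "d": ["a", "b"], "e": ["c", "d"],
}

def layer(deps):
    depth = {}
    def d(pe):
        if pe not in depth:
            # A PE sits one level below the deepest PE it depends on.
            depth[pe] = 1 + max((d(p) for p in deps[pe]), default=-1)
        return depth[pe]
    for pe in deps:
        d(pe)
    invocations = {}
    for pe, k in depth.items():
        invocations.setdefault(k, []).append(pe)
    return [invocations[k] for k in sorted(invocations)]

print(layer(deps))   # [['a', 'b'], ['c', 'd'], ['e']]
```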





Multiprocessor system, multiprocessor control method, and multiprocessor integrated circuit

In a multiprocessor system, a processor assigned a larger amount of tasks generally performs a larger amount of communication with the other task-assigned processors than a processor assigned a smaller amount of tasks. Thus, in order for each processor to perform the routing process efficiently, tasks are assigned such that, for any first processor and second processor where the number of task-assigned processors directly connected with the second processor is smaller than the number of task-assigned processors directly connected with the first processor, the amount of tasks assigned to the first processor is equal to or larger than the amount of tasks assigned to the second processor.





Method for activating processor cores within a computer system

A method for activating processor cores within a computer system is disclosed. Initially, a value representing a number of processor cores to be enabled within the computer system is received. The computer system includes multiple processors, and each of the processors includes multiple processor cores. Next, a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks.





Data accessing method for flash memory storage device having data perturbation module, and storage system and controller using the same

A data accessing method, and a storage system and a controller using the same, are provided. The data accessing method is suitable for a flash memory storage system having a data perturbation module. The data accessing method includes receiving a read command from a host and obtaining a logical block to be read and a page to be read from the read command. The data accessing method also includes determining whether a physical block in a data area corresponding to the logical block to be read is a new block and transmitting predetermined data to the host when the physical block corresponding to the logical block to be read is a new block. Thereby, the host is prevented from reading garbled data from the flash memory storage system having the data perturbation module.
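
A minimal sketch of the guard, with assumed names: a read that maps to a never-written physical block returns fixed predetermined data rather than the scrambled bits the perturbation module would otherwise produce.

```python
# Sketch of the new-block read guard (assumed names and layout).

PREDETERMINED = b"\x00" * 16     # e.g. all zeros per 16-byte page

new_blocks = {7}                 # physical blocks never written since erase
l2p = {3: 7, 4: 9}               # logical block -> physical block mapping
flash = {9: {0: b"real data here!"}}

def read(logical_block, page):
    phys = l2p[logical_block]
    if phys in new_blocks:       # never written: don't descramble garbage
        return PREDETERMINED
    return flash[phys][page]

print(read(3, 0))   # predetermined data: the block is new
print(read(4, 0))   # actual stored data
```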





High performance computing (HPC) node having a plurality of switch coupled processors

A High Performance Computing (HPC) node comprises a motherboard, a switch comprising eight or more ports integrated on the motherboard, and at least two processors operable to execute an HPC job, with each processor communicably coupled to the integrated switch and integrated on the motherboard.





Method and system for managing hardware resources to implement system functions using an adaptive computing architecture

An adaptable integrated circuit is disclosed having a plurality of heterogeneous computational elements coupled to an interconnection network. The interconnection network changes interconnections between the plurality of heterogeneous computational elements in response to configuration information. A first group of computational elements is allocated to form a first version of a functional unit to perform a first function by changing interconnections in the interconnection network between the first group of heterogeneous computational elements. A second group of computational elements is allocated to form a second version of a functional unit to perform the first function by changing interconnections in the interconnection network between the second group of heterogeneous computational elements. One or more of the first or second group of heterogeneous computational elements are reallocated to perform a second function by changing the interconnections between the one or more of the first or second group of heterogeneous computational elements.





Data processing method and apparatus for prefetching

A data processing device includes processing circuitry 20 for executing a first memory access instruction to a first address of a memory device 40 and a second memory access instruction to a second address of the memory device 40, the first address being different from the second address. The data processing device also includes prefetching circuitry 30 for prefetching data from the memory device 40 based on a stride length 70 and instruction analysis circuitry 50 for determining a difference between the first address and the second address. Stride refining circuitry 60 is also provided to refine the stride length based on factors of the stride length and factors of the difference calculated by the instruction analysis circuitry 50.
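
A hedged sketch that reads "factors" via the greatest common divisor: the gcd of the current stride and the observed address difference is a factor of both and so is a natural refined stride. The circuit's exact rule may differ.

```python
# Sketch of stride refinement from common factors (gcd interpretation).

from math import gcd

def refine(stride, first_addr, second_addr):
    diff = abs(second_addr - first_addr)
    # gcd(stride, 0) == stride, so a zero difference leaves the stride alone.
    return gcd(stride, diff)

stride = 24
stride = refine(stride, 0x1000, 0x1010)  # difference 16 -> gcd(24, 16) == 8
print(stride)                            # the prefetcher now strides by 8
```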





Shared load-store unit to monitor network activity and external memory transaction status for thread switching

An array of processing elements (PEs) is arranged in a data packet-switched network interconnecting the PEs and memory to enable any of the PEs to access the memory. The network connects the PEs and their local memories to a common controller. The common controller may include a shared load/store (SLS) unit and an array control unit. A shared read may be addressed to an external device via the common controller. The SLS unit can continue activity as if a normal shared read operation has taken place, except that the transactions that have been sent externally may take more cycles to complete than the local shared reads. Hence, a number of transaction-enabled flags may not yet have been deactivated even though there is no more bus activity. The SLS unit can use this state to indicate to the array control unit that a thread switch may now take place.





Hardware assist thread for increasing code parallelism

Mechanisms are provided for offloading a workload from a main thread to an assist thread. The mechanisms receive, in a fetch unit of a processor of the data processing system, a branch-to-assist-thread instruction of a main thread. The branch-to-assist-thread instruction informs hardware of the processor to look for an already spawned idle thread to be used as an assist thread. Hardware implemented pervasive thread control logic determines if one or more already spawned idle threads are available for use as an assist thread. The hardware implemented pervasive thread control logic selects an idle thread from the one or more already spawned idle threads if it is determined that one or more already spawned idle threads are available for use as an assist thread, to thereby provide the assist thread. In addition, the hardware implemented pervasive thread control logic offloads a portion of a workload of the main thread to the assist thread.





Multiprocessor messaging system

A multiprocessor system includes a first microprocessor and a second microprocessor. A first signaling pathway is configured to send message transmission coordination signals from the first microprocessor to the second microprocessor. The first signaling pathway may be coupled to at least two flag registers associated with the second microprocessor. A second signaling pathway is configured to send message transmission coordination signals from the second microprocessor to the first microprocessor. The second signaling pathway may be coupled to at least two flag registers associated with the first microprocessor. The first signaling pathway is independent of the second signaling pathway.





Data mover moving data to accelerator for processing and returning result data based on instruction received from a processor utilizing software and hardware interrupts

Efficient data processing apparatus and methods include hardware components which are pre-programmed by software. Each hardware component triggers the other to complete its tasks. After the final pre-programmed hardware task is complete, the hardware component issues a software interrupt.





System, method and computer program product for recursively executing a process control operation to use an ordered list of tags to initiate corresponding functional operations

In accordance with embodiments, there are provided mechanisms and methods for controlling a process using a process map. These mechanisms and methods can enable process operations to execute in order without necessarily having knowledge of one another. Providing the process map avoids any requirement that the operations themselves be programmed to follow a particular sequence, which further improves the ease with which the sequence of operations may be changed.





Debug in a multicore architecture

A method of monitoring thread execution within a multicore processor architecture which comprises a plurality of interconnected processor elements for processing the threads. The method comprises receiving a plurality of thread parameter indicators of one or more parameters relating to the function, identity, and/or execution location of a thread or threads; comparing at least one of the thread parameter indicators with a first plurality of predefined criteria, each representative of an indicator of interest; and generating an output consequent upon the thread parameter indicators that have been identified to be of interest as a result of the said comparison.





System and method for communicating with sensors/loggers in integrated radio frequency identification (RFID) tags

A system and method are disclosed for communicating with sensors/loggers in integrated radio frequency identification (RFID) tags. An RFID reader uses a Communicate With Data Logger Command to communicate with a data logger in an RFID tag. The RFID reader performs data access processes using an Index Register and a Data Register of the RFID tag. The RFID reader selects one of (1) Index Read access, (2) Index Write access, (3) Data Write access, (4) Data Read access with parity, and (5) Data Read access with cyclic redundancy check (CRC). The RFID tag performs the requested data access and then performs an error detection process.





Reception according to a data transfer protocol of data directed to any of a plurality of destination entities

A data processing system arranged for receiving over a network, according to a data transfer protocol, data directed to any of a plurality of destination identities, the data processing system comprising: data storage for storing data received over the network; a first processing arrangement for performing processing in accordance with the data transfer protocol on received data in the data storage, for making the received data available to the respective destination identities; and a response former arranged for receiving a message requesting a response indicating the availability of received data to each of a group of destination identities, and forming such a response; wherein the system is arranged to form the said response in dependence on receiving the said message.