
Vertex array access bounds checking

Aspects of the invention relate generally to validating array bounds in an API emulator. More specifically, an OpenGL (or OpenGL ES) emulator may examine each array accessed by a 3D graphics program. If the program requests information outside of an array, the emulator may return an error when the graphic is drawn. However, when the user (here, a programmer) queries the value of the array, the correct value (or the value provided by the programmer) may be returned. In another example, the emulator may examine index buffers, which contain the indices of elements in other arrays to access. If the program requests an index that is out of range, the emulator may return an error when the graphic is drawn. Again, when the programmer queries the value of the array, the correct value (or the value provided by the programmer) may be returned.
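
A minimal Python sketch of the draw-time check described above; the class and function names (VertexArray, draw_elements) and the error string are hypothetical illustrations, not the patented emulator:

```python
# Sketch of emulator-side bounds checking: queries return the programmer's
# value, while out-of-range accesses are reported only when drawing.

class VertexArray:
    def __init__(self, data, size):
        self.data = data          # values supplied by the programmer
        self.size = size          # declared number of accessible elements

    def query(self, i):
        # Queries always return the programmer-supplied value.
        return self.data[i]


def draw_elements(array, index_buffer):
    """Emulated draw call: validate every index before 'drawing'."""
    for idx in index_buffer:
        if idx < 0 or idx >= array.size:
            # The error is reported at draw time, not at query time.
            return f"ERROR: index {idx} outside array of size {array.size}"
    return "OK: drawn"


if __name__ == "__main__":
    positions = VertexArray(data=[0.0, 1.0, 2.0, 3.0], size=4)
    print(positions.query(2))                    # 2.0 -- query is unaffected
    print(draw_elements(positions, [0, 1, 2]))   # OK: drawn
    print(draw_elements(positions, [0, 1, 7]))   # error reported at draw time
```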





Data storage device and operating method thereof

A data storage device includes a first memory device configured to store data having a first property, a second memory device configured to store data having a second property, and a controller. The controller selects data stored in the first memory device, and transfers the selected data to the second memory device or stores the selected data in another physical location of the first memory device selectively depending on an update count (UC) of an address at which the selected data is stored.





Multipass programming in buffers implemented in non-volatile data storage systems

The various implementations described herein include systems, methods and/or devices used to enable multipass programming in buffers implemented in non-volatile data storage systems (e.g., using one or more flash memory devices). In one aspect, a portion of memory (e.g., a page in a block of a flash memory device) may be programmed many (e.g., 1000) times before an erase is required. Some embodiments include systems, methods and/or devices to integrate Bloom filter functionality in a non-volatile data storage system, where a portion of memory storing one or more bits of a Bloom filter array may be programmed many (e.g., 1000) times before the contents of the portion of memory need to be moved to an unused location in the memory.
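
A small Python sketch of the idea that Bloom filter bits only transition from 0 to 1, which is what lets a flash page be reprogrammed many times before relocation; the page size, hash choice, and 1000-program budget are illustrative assumptions rather than the patented design:

```python
# Bloom filter whose bit array lives in a simulated flash "page".
import hashlib

PAGE_BITS = 1024
PROGRAM_BUDGET = 1000   # programs allowed before contents must be relocated


class FlashBloomFilter:
    def __init__(self, num_hashes=3):
        self.bits = [0] * PAGE_BITS     # flash-like: bits only go 0 -> 1
        self.programs = 0               # program operations on this "page"
        self.num_hashes = num_hashes

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % PAGE_BITS

    def add(self, key):
        for pos in self._positions(key):
            if self.bits[pos] == 0:
                self.bits[pos] = 1      # 0 -> 1 needs no erase in flash
                self.programs += 1
        if self.programs >= PROGRAM_BUDGET:
            self._relocate()

    def contains(self, key):
        return all(self.bits[pos] for pos in self._positions(key))

    def _relocate(self):
        # Copy the contents to an unused location (modeled as a fresh list)
        # and reset the program counter for the new page.
        self.bits = list(self.bits)
        self.programs = 0


if __name__ == "__main__":
    bf = FlashBloomFilter()
    bf.add("block-42")
    print(bf.contains("block-42"))   # True
    print(bf.contains("block-99"))   # almost certainly False
```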





Method and apparatus for calibrating a memory interface with a number of data patterns

Apparatuses and methods of calibrating a memory interface are described. Calibrating a memory interface can include loading and outputting units of a first data pattern into and from at least a portion of a register to generate a first read capture window. Units of a second data pattern can be loaded into and output from at least the portion of the register to generate a second read capture window. One of the first read capture window and the second read capture window can be selected and a data capture point for the memory interface can be calibrated according to the selected read capture window.
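
The window-selection step can be pictured with a short Python sketch; the pass/fail results per delay tap and the choice of the wider window are assumptions used for illustration, not the claimed calibration circuit:

```python
# Pick a read capture window from two data-pattern sweeps and place the
# data capture point at the center of the selected window.

def capture_window(passes):
    """Return (start, length) of the longest run of passing delay taps."""
    best = (0, 0)
    start, length = 0, 0
    for i, ok in enumerate(passes):
        if ok:
            if length == 0:
                start = i
            length += 1
            if length > best[1]:
                best = (start, length)
        else:
            length = 0
    return best


def calibrate(delay_taps, passes_pattern_a, passes_pattern_b):
    win_a = capture_window(passes_pattern_a)   # first read capture window
    win_b = capture_window(passes_pattern_b)   # second read capture window
    start, length = max(win_a, win_b, key=lambda w: w[1])   # select one window
    return delay_taps[start + length // 2]     # capture point at its center


if __name__ == "__main__":
    taps = list(range(16))
    pattern_a_results = [False] * 3 + [True] * 8 + [False] * 5
    pattern_b_results = [False] * 5 + [True] * 6 + [False] * 5
    print("capture point at tap", calibrate(taps, pattern_a_results, pattern_b_results))
```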





System and method to process event reporting in an adapter

A method and system for an adapter are provided. The adapter includes a plurality of function hierarchies, with each function hierarchy including a plurality of functions and each function being associated with an event. The adapter also includes a plurality of processors for processing one or more events generated by the plurality of functions. The adapter further includes a first set of arbitration modules, where each arbitration module is associated with a function hierarchy and receives interrupt signals from the functions within the associated function hierarchy and selects one of the interrupt signals. The adapter also includes a second set of arbitration modules, where each arbitration module receives processor specific interrupt signals and selects one of the interrupt signals for processing an event associated with the selected interrupt signal.





Technique for communicating interrupts in a computer system

A technique to enable efficient interrupt communication within a computer system. In one embodiment, an advanced programmable interrupt controller (APIC) is interfaced via a set of bits within an APIC interface register using various interface instructions or operations, without using memory-mapped input/output (MMIO).





Handling interrupts in a multi-processor system

A data processing apparatus has a plurality of processors and a plurality of interrupt interfaces each for handling interrupt requests from a corresponding processor. An interrupt distributor controls routing of interrupt requests to the interrupt interfaces. A shared interrupt request is serviceable by multiple processors. In response to the shared interrupt request, a target interrupt interface issues an interrupt ownership request to the interrupt distributor, without passing the shared interrupt request to the corresponding processor, if it estimates that the corresponding processor is available for servicing the shared interrupt request. The shared interrupt request is passed to the corresponding processor when an ownership confirmation is received from the interrupt distributor indicating that the processor has been selected for servicing the shared interrupt request.





Dongle device with video encoding and methods for use therewith

A universal serial bus (USB) dongle device includes a USB interface that receives selection data from a host device that indicates a selection of a first video format from a plurality of available formats. The USB interface also receives an input video signal from the host device in the first video format and a power signal from the host device. An encoding module generates a processed video signal in a second video format based on the input video signal, wherein the first video format differs from the second video format. The USB interface transfers the processed video signal to the host device.





Information processing apparatus, method thereof, and storage medium

An information processing apparatus includes a plurality of modules connected in a ring via a bus, and each module processes packets flowing in a single direction around the ring in a predetermined order. Each module includes a communication unit for transmitting a packet received from a first direction in the ring via the bus to a second direction, a discrimination unit for discriminating, from among the packets received from the first direction, a processing packet to be processed by the module, and a processing unit which is connected one-to-one with the communication unit and configured to process the processing packet. Among the packets it transmits in the second direction, the communication unit transmits a packet processed by the processing unit at an interval equal to or greater than the processing time of a processing packet processed by a module at a later stage in the predetermined order.





Optimizing a rate of transfer of data between an RF generator and a host system within a plasma tool

A bus interconnect interfaces a host system to a radio frequency (RF) generator that is coupled to a plasma chamber. The bus interconnect includes a first set of host ports, which are used to provide a power component setting and a frequency component setting to the RF generator. The ports of the first set of host ports are used to receive distinct variables that change over time. The bus interconnect further includes a second set of generator ports used to send a power read back value and a frequency read back value to the host system. The bus interconnect includes a sampler circuit integrated with the host system. The sampler circuit is configured to sample signals at the ports of the first set at selected clock edges to capture operating state data of the plasma chamber and the RF generator.





Versatile lane configuration using a PCIe PIe-8 interface

Each PCIe device may include a media access control (MAC) interface and a physical (PHY) interface that support a plurality of different lane configurations. These interfaces may include hardware modules that support 1×32, 2×16, 4×8, 8×4, 16×2, and 32×1 communication. Instead of physically connecting each of the hardware modules in the MAC interface to respective hardware modules in the PHY interface using dedicated traces, the device may include two bus controllers that arbitrate which hardware modules are connected to an internal bus coupling the two interfaces. When a different lane configuration is desired, the bus controller couples the corresponding hardware module to the internal bus. In this manner, the different lane configurations share the same lanes (and wires) of the bus. Accordingly, the shared bus needs only enough lanes (and wires) to accommodate the widest lane configuration.





Method to facilitate fast context switching for partial and extended path extension to remote expanders

A method, apparatus, and system for switching from an existing target end device to a next target end device in a multi-expander storage topology by using Fast Context Switching. The method enhances Fast Context Switching by allowing Fast Context Switching to reuse or extend part of an existing connection path to an end device directly attached to a remote expander. The method can include reusing or extending at least a partial path of an established connection between an initiator and the existing target end device for a connection between the initiator and the next target end device, whereby the existing target end device and the next target end device are locally attached to different expanders.





System and method for detecting accidental peripheral device disconnection

A detection device for detecting the manner in which a peripheral device is removed from an electronic device is proposed. The detection device can be on the peripheral device or the electronic device and detects whether the peripheral device was removed in a manner that indicates the removal was intentional or unintentional.





Apparatuses enabling concurrent communication between an interface die and a plurality of dice stacks, interleaved conductive paths in stacked devices, and methods for forming and operating the same

Various embodiments include apparatuses, stacked devices and methods of forming dice stacks on an interface die. In one such apparatus, a dice stack includes at least a first die and a second die, and conductive paths coupling the first die and the second die to the common control die. In some embodiments, the conductive paths may be arranged to connect with circuitry on alternating dice of the stack. In other embodiments, a plurality of dice stacks may be arranged on a single interface die, and some or all of the dice may have interleaving conductive paths.





Electronic devices and methods for sharing peripheral devices in dual operating systems

A method for sharing peripheral devices in dual operating systems for an electronic device having at least one peripheral device is provided. The method includes: receiving, from a user, a setting value for the peripheral device under a first operating system; activating a second operating system; transmitting the setting value to the second operating system; and switching from the first operating system to the second operating system, wherein the second operating system sets the peripheral device with the setting value and enables the electronic device to operate under the second operating system.





System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information

A system, method and computer-readable media for managing a compute environment are disclosed. The method includes importing identity information from an identity manager into a module that performs workload management and scheduling for a compute environment and, unless a conflict exists, modifying the behavior of the workload management and scheduling module to incorporate the imported identity information such that access to and use of the compute environment occurs according to the imported identity information. The compute environment may be a cluster or a grid wherein multiple compute environments communicate with multiple identity managers.





Reducing cross queue synchronization on systems with low memory latency across distributed processing nodes

A method for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected processing nodes or processing units to retrieve work from the work element in the GCQ.
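
A compact Python sketch of the APU-mask check on work retrieval; the data layout and names are hypothetical stand-ins for the global command queue entry:

```python
# A work element whose APU bit mask gates which processing units may take work.

class WorkElement:
    def __init__(self, items, apu_mask):
        self.items = list(items)     # independently executable work items
        self.apu_mask = apu_mask     # bit i set => processing unit i selected

    def retrieve(self, unit_id):
        # Only units whose bit is set in the APU mask may retrieve work.
        if not (self.apu_mask >> unit_id) & 1:
            return None
        return self.items.pop() if self.items else None


if __name__ == "__main__":
    # Select units 0 and 2 (mask 0b101) for this work element.
    we = WorkElement(items=["item-a", "item-b", "item-c"], apu_mask=0b101)
    print(we.retrieve(0))   # item-c
    print(we.retrieve(1))   # None -- unit 1 is not in the mask
    print(we.retrieve(2))   # item-b
```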





Method and system for heterogeneous filtering framework for shared memory data access hazard reports

A system and method for detecting, filtering, prioritizing and reporting shared memory hazards are disclosed. The method includes, for a unit of hardware operating on a block of threads, mapping a plurality of shared memory locations assigned to the unit to a tracking table. The tracking table comprises initialization information for each shared memory location. The method also includes, for an instruction of a program within a barrier region, identifying a potential conflict by identifying a second access to a location in shared memory within a block of threads executed by the hardware unit. First information associated with a first access and second information associated with the second access to the location is determined. Filter criteria are applied to the first and second information to determine whether the instruction causes a reportable hazard. The instruction is reported when it causes the reportable hazard.
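
A toy Python sketch of the tracking-table comparison; the single filter criterion used here (different threads plus at least one write) is an assumed example, not the full set of filter criteria in the claims:

```python
# Record the first access to each shared location, then test later accesses
# against filter criteria before reporting a hazard.

def detect_hazards(accesses):
    """accesses: list of (location, thread_id, kind) with kind 'R' or 'W'."""
    tracking = {}        # location -> first recorded access (initialization info)
    reports = []
    for loc, tid, kind in accesses:
        if loc not in tracking:
            tracking[loc] = (tid, kind)
            continue
        first_tid, first_kind = tracking[loc]
        # Filter criteria: different threads and at least one write access.
        if tid != first_tid and 'W' in (kind, first_kind):
            reports.append((loc, (first_tid, first_kind), (tid, kind)))
    return reports


if __name__ == "__main__":
    trace = [(0x10, 0, 'W'), (0x10, 1, 'R'),    # reportable: write then read
             (0x20, 2, 'R'), (0x20, 3, 'R')]    # filtered out: read/read
    for loc, first, second in detect_hazards(trace):
        print("hazard at", hex(loc), first, second)
```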





Computing job management based on priority and quota

In one embodiment, the invention provides a method of managing a computing job based on a job priority and a submitter quota.
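
One plausible reading of such a scheme, sketched in Python; the quota unit (concurrently admitted jobs per submitter) and field names are assumptions made for the example:

```python
# Admit jobs in priority order, holding any job whose submitter is over quota.

def schedule(jobs, quotas):
    """jobs: list of (job_id, submitter, priority); quotas: submitter -> max jobs."""
    running = {}
    admitted, held = [], []
    for job_id, submitter, priority in sorted(jobs, key=lambda j: -j[2]):
        if running.get(submitter, 0) < quotas.get(submitter, 0):
            running[submitter] = running.get(submitter, 0) + 1
            admitted.append(job_id)
        else:
            held.append(job_id)      # over quota: held regardless of priority
    return admitted, held


if __name__ == "__main__":
    jobs = [("j1", "alice", 5), ("j2", "alice", 9), ("j3", "bob", 7)]
    print(schedule(jobs, quotas={"alice": 1, "bob": 2}))
    # (['j2', 'j3'], ['j1']) -- alice's lower-priority job exceeds her quota
```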





Virtual machine provisioning based on tagged physical resources in a cloud computing environment

A cloud system may create physical resource tags to store relationships between cloud computing offerings, such as computing service offerings, storage offerings, and network offerings, and the specific physical resources in the cloud computing environment. Cloud computing offerings may be presented to cloud customers, the offerings corresponding to various combinations of computing services, storage, networking, and other hardware or software resources. After a customer selects one or more cloud computing offerings, a cloud resource manager or other component within the cloud infrastructure may retrieve a set of tags and determine a set of physical hardware resources associated with the selected offerings. The physical hardware resources associated with the selected offerings may be subsequently used to provision and create the new virtual machine and its operating environment.
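
A simplified Python sketch of resolving selected offerings to tagged physical resources before provisioning; the tag names, resource identifiers, and data layout are invented for illustration:

```python
# Map cloud computing offerings to physical resources via stored tags.

OFFERING_TAGS = {
    "compute.large": {"rack-7", "rack-9"},       # hosts tagged for this offering
    "storage.ssd":   {"san-2"},
    "network.10g":   {"switch-fab-b"},
}

PHYSICAL_RESOURCES = {
    "rack-7": "host", "rack-9": "host",
    "san-2": "storage", "switch-fab-b": "network",
}


def resolve(selected_offerings):
    """Return physical resources, grouped by type, matching the selected offerings."""
    resolved = {}
    for offering in selected_offerings:
        for tag in OFFERING_TAGS.get(offering, set()):
            resolved.setdefault(PHYSICAL_RESOURCES[tag], []).append(tag)
    return resolved


if __name__ == "__main__":
    # The resolved hardware set would then be handed to the VM provisioning step.
    print(resolve(["compute.large", "storage.ssd", "network.10g"]))
```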





Managing utilization of physical processors of a shared processor pool in a virtualized processor environment

Systems, methods and computer program products may provide managing utilization of one or more physical processors in a shared processor pool. A method of managing utilization of one or more physical processors in a shared processor pool may include determining a current amount of utilization of the one or more physical processors and generating an instruction message. The instruction message may be at least partially determined by the current amount of utilization. The method may further include sending the instruction message to a guest operating system, the guest operating system having a number of enabled virtual processors.





Fence elision for work stealing

Methods and systems for statistically eliding fences in a work stealing algorithm are disclosed. A data structure comprising a head pointer, tail pointer, barrier pointer and an advertising flag allows for dynamic load-balancing across processing resources in computer applications.





Method and apparatus for generating metadata for digital content

A method and an apparatus for generating metadata for digital content are described, which allow the generated metadata to be reviewed while metadata generation is still ongoing. The metadata generation is split into a plurality of processing tasks, which are allocated to two or more processing nodes. The metadata generated by the two or more processing nodes is gathered and visualized on an output unit.





System and method for managing mainframe computer system usage

In a mainframe computer system, workload tasks are accomplished using a logically partitioned data processing system, where the partitioned data processing system is divided into multiple logical partitions. In a system and method for managing such a computer system, each logical partition runs workload tasks that can be classified based on time criticality, and groups of logical partitions can be freely defined. Processing capacity limits for the logical partitions in a group of logical partitions are set based upon defined processing capacity thresholds and upon an iterative determination of how much capacity is needed for time-critical workload tasks. Workload can be balanced between logical partitions within a group to prevent surplus processing capacity from being used to run non-time-critical workload on one logical partition while another logical partition running only time-critical workload tasks faces a processing deficit.





Resisting the spread of unwanted code and data

A method of processing an electronic file by identifying portions of content data in the electronic file and determining if each portion of content data is passive content data having a fixed purpose or active content data having an associated function. If a portion is passive content data, then a determination is made as to whether the portion of passive content data is to be re-generated. If a portion is active content data, then the portion is analyzed to determine whether the portion of active content data is to be re-generated. A re-generated electronic file is then created from the portions of content data which are determined to be re-generated.
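
A toy Python sketch of the passive/active split and selective re-generation; the classification rule and the analysis applied to active content are stand-ins chosen for the example, not the patented determination logic:

```python
# Rebuild a file from only those content portions approved for re-generation.

def is_active(portion):
    # Toy classifier: script- or macro-like content counts as "active".
    return portion.strip().startswith(("<script", "MACRO"))


def regenerate(portions):
    kept = []
    for portion in portions:
        if is_active(portion):
            # Active content: analyse before allowing re-generation
            # (here: drop anything that references an external URL).
            if "http" not in portion:
                kept.append(portion)
        else:
            # Passive content with a fixed purpose is re-generated as-is here.
            kept.append(portion)
    return "".join(kept)


if __name__ == "__main__":
    parts = ["<p>hello</p>", "<script src='http://x'></script>", "<p>bye</p>"]
    print(regenerate(parts))   # the script portion is not re-generated
```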





System and method for below-operating system trapping and securing loading of code into memory

A system for protecting an electronic device against malware includes a memory, an operating system configured to execute on the electronic device, and a below-operating-system security agent. The below-operating-system security agent is configured to trap an attempted access of a resource of the electronic device, access one or more security rules to determine whether the attempted access is indicative of malware, and operate at a level below all of the operating systems of the electronic device accessing the memory. The attempted access includes attempting to write instructions to the memory and attempting to execute the instructions.





System and method for performing memory management using hardware transactions

The systems and methods described herein may be used to implement a shared dynamic-sized data structure using hardware transactional memory to simplify and/or improve memory management of the data structure. An application (or thread thereof) may indicate (or register) the intended use of an element of the data structure and may initialize the value of the data structure element. Thereafter, another thread or application may use hardware transactions to access the data structure element while confirming that the data structure element is still part of the dynamic data structure and/or that memory allocated to the data structure element has not been freed. Various indicators may be used to determine whether memory allocated to the element can be freed.





Managing safe removal of a passthrough device in a virtualization system

Methods and systems for managing a removal of a passthrough device from a guest managed by a hypervisor in a virtualized computing environment. A hypervisor receives a request from the guest for access to a passthrough device. The hypervisor sets, in a memory, a last accessed state associated with a virtual machine executing the guest. The hypervisor forwards the request to the passthrough device and configures the host CPU to send a subsequent access request directly to the passthrough device. In response to a virtual machine reset, the hypervisor clears the last accessed state and instructs the host CPU to send a post-reset access request to the hypervisor.





Virtualization and dynamic resource allocation aware storage level reordering

A system and method for reordering storage levels in a virtualized environment includes identifying a virtual machine (VM) to be transitioned and determining a new storage level order for the VM. The new storage level order reduces a VM live state during a transition, and accounts for hierarchical shared storage memory and criteria imposed by an application to reduce recovery operations after dynamic resource allocation actions. The new storage level order recommendation is propagated to VMs. The new storage level order is applied in the VMs. A different storage-level order is recommended after the transition.





Method and system for providing storage services

Method and system are provided for managing components of a storage operating environment having a plurality of virtual machines that can access a storage device managed by a storage system. The virtual machines are executed by a host platform that also executes a processor-executable host services module that interfaces with at least a processor-executable plug-in module for providing information regarding the virtual machines and assists in storage related services, for example, replicating the virtual machines.





Apparatus and methods for adaptive thread scheduling on asymmetric multiprocessor

Techniques for adaptive thread scheduling on a plurality of cores for reducing system energy are described. In one embodiment, a thread scheduler receives leakage current information associated with the plurality of cores. The leakage current information is employed to schedule a thread on one of the plurality of cores to reduce system energy usage. On-chip calibration of the sensors is also described.





Using pause on an electronic device to manage resources

An electronic device for using pause to manage resources is described. The electronic device includes a processor and instructions stored in memory. The electronic device monitors a pause duration and determines whether to perform a resource management operation based on the pause duration. The electronic device performs the resource management operation based on the pause duration.
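
A brief Python sketch of acting on a monitored pause duration; the threshold value and the particular management operation are illustrative assumptions:

```python
# Trigger a resource management operation once a pause has lasted long enough.
import time


class PauseManager:
    def __init__(self, release_after=5.0):
        self.release_after = release_after   # seconds of pause before acting
        self.paused_at = None

    def on_pause(self):
        self.paused_at = time.monotonic()

    def check(self):
        """Called periodically; decide whether to run a management operation."""
        if self.paused_at is None:
            return "no-op"
        if time.monotonic() - self.paused_at >= self.release_after:
            self.paused_at = None
            return "release decoder buffers"   # example management operation
        return "keep resources"


if __name__ == "__main__":
    mgr = PauseManager(release_after=0.1)
    mgr.on_pause()
    time.sleep(0.2)
    print(mgr.check())   # release decoder buffers
```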





Remediating gaps between usage allocation of hardware resource and capacity allocation of hardware resource

A usage allocation of a hardware resource to each of a number of workloads over time is determined using a demand model. The usage allocation of the resource includes a current and past actual usage allocation of the resource, a future projected usage allocation of the resource, and current and past actual usage of the resource. A capacity allocation of the resource is determined using a capacity model. The capacity allocation of the resource includes a current and past capacity and a future projected capacity of the resource. Whether a gap exists between the usage allocation and the capacity allocation is determined using a mapping model. Where the gap exists between the usage allocation of the resource and the capacity allocation of the resource, a user is presented with options determined using the mapping model and selectable by the user to implement a remediation strategy to close the gap.
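
The gap test itself reduces to comparing the two projections period by period, as in this small Python sketch; the numbers and the single remediation option shown are illustrative:

```python
# Flag periods where projected usage allocation exceeds projected capacity.

def find_gaps(usage_projection, capacity_projection):
    """Both arguments map a period label to a quantity of the resource."""
    gaps = {}
    for period, usage in usage_projection.items():
        capacity = capacity_projection.get(period, 0)
        if usage > capacity:
            gaps[period] = usage - capacity
    return gaps


if __name__ == "__main__":
    usage = {"2024-Q3": 80, "2024-Q4": 120}       # projected cores needed
    capacity = {"2024-Q3": 100, "2024-Q4": 100}   # projected cores available
    for period, shortfall in find_gaps(usage, capacity).items():
        # A real mapping model would present several selectable strategies.
        print(f"{period}: short {shortfall} cores -> add capacity or defer workload")
```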





Managing access to a shared resource by tracking active requestor job requests

The technology of the present application provides a networked computer system with at least one workstation and at least one shared resource such as a database. Access to the database by the workstation is managed by a database management system. An access engine reviews job requests for access to the database and allows job requests access to the resource based on protocols stored by the system.





Two-tiered dynamic load balancing using sets of distributed thread pools

By employing a two-tier load balancing scheme, embodiments of the present invention may reduce the overhead of shared resource management, while increasing the potential aggregate throughput of a thread pool. As a result, the techniques presented herein may lead to increased performance in many computing environments, such as graphics intensive gaming.
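
A rough Python illustration of the two tiers, using sets of small thread pools instead of one large contended pool; the pool counts and the round-robin policies at both tiers are assumptions for the sketch:

```python
# Tier 1 picks a pool set, tier 2 picks a pool within that set.
from concurrent.futures import ThreadPoolExecutor
import itertools

POOL_SETS = [
    [ThreadPoolExecutor(max_workers=2) for _ in range(2)],   # set 0
    [ThreadPoolExecutor(max_workers=2) for _ in range(2)],   # set 1
]
_set_picker = itertools.cycle(range(len(POOL_SETS)))
_pool_pickers = [itertools.cycle(range(len(s))) for s in POOL_SETS]


def submit(task, *args):
    set_idx = next(_set_picker)                  # tier 1: choose a pool set
    pool_idx = next(_pool_pickers[set_idx])      # tier 2: choose a pool in it
    return POOL_SETS[set_idx][pool_idx].submit(task, *args)


if __name__ == "__main__":
    futures = [submit(pow, 2, n) for n in range(8)]
    print([f.result() for f in futures])
    for pool_set in POOL_SETS:
        for pool in pool_set:
            pool.shutdown()
```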





Converting dependency relationship information representing task border edges to generate a parallel program

According to an embodiment, based on task border information, and first-type dependency relationship information containing N number of nodes corresponding to data accesses to one set of data, containing edges representing dependency relationship between the nodes, and having at least one node with an access reliability flag indicating reliability/unreliability of corresponding data access; task border edges, of edges extending over task borders, are identified that have an unreliable access node linked to at least one end, and presentation information containing unreliable access nodes is generated. According to dependency existence information input corresponding to the set of data, conversion information indicating absence of data access to the unreliable access nodes is output. According to the conversion information, the first-type dependency relationship information is converted into second-type dependency relationship information containing M number of nodes (0≦M≦N) corresponding to data accesses to the set of data and containing edges representing inter-node dependency relationship.





Information processing device and task switching method

Disclosed is an information processing device and a task switching method that can reduce the time required for switching of tasks in a plurality of coprocessors. The information processing device (30) includes a processor core (301); coprocessors (311 to 31n) including operation units (321 to 32n) that perform operation in response to a request from the processor core (301) and operation storage units (331 to 33n) that store the contents of operation of the operation units (321 to 32n); save storage units (351 to 35n) that store the saved contents of operation; a task switching control unit (302) that outputs a save/restore request signal when switching a task on which operation is performed by the coprocessors (311 to 31n); and save/restore units (341 to 34n) that perform at least one of saving of the contents of operation in the operation storage units (331 to 33n) to the save storage units (351 to 35n) and restoration of the contents of operation in the save storage units (351 to 35n) to the operation storage units (331 to 33n) in response to the save/restore request signal.





Method and apparatus for continuously producing 1,1,1,2,3-pentafluoropropane with high yield

A method and apparatus for continuously producing 1,1,1,2,3-pentafluoropropane with high yield are provided. The method includes (a) bringing a CoF3-containing cobalt fluoride in a reactor into contact with 3,3,3-trifluoropropene to produce a CoF2-containing cobalt fluoride and 1,1,1,2,3-pentafluoropropane, (b) transferring the CoF2-containing cobalt fluoride in the reactor to a regenerator and bringing the transferred CoF2-containing cobalt fluoride into contact with fluorine gas to regenerate a CoF3-containing cobalt fluoride, and (c) transferring the CoF3-containing cobalt fluoride in the regenerator to the reactor and employing the transferred CoF3-containing cobalt fluoride in Operation (a). Accordingly, the 1,1,1,2,3-pentafluoropropane can be continuously produced with high yield from the 3,3,3-trifluoropropene using a cobalt fluoride (CoF2/CoF3) as a fluid catalyst, thereby improving the reaction stability and readily adjusting the optimum conversion rate and selectivity.





Liquid crystal compound having fluorovinyl group, liquid crystal composition and liquid crystal display device

A liquid crystal compound having a high stability to heat, light and so forth, a high clearing point, a low minimum temperature of a liquid crystal phase, a small viscosity, a suitable optical anisotropy, a large dielectric anisotropy, a suitable elastic constant and an excellent solubility in other liquid crystal compounds, a liquid crystal composition containing the compound, and a liquid crystal display device including the composition. The compound is represented by formula (1): wherein, for example, R1 is fluorine or alkyl having 1 to 10 carbons; ring A1 and ring A2 are 1,4-phenylene, or 1,4-phenylene in which at least one hydrogen is replaced by fluorine; Z1, Z2 and Z3 are a single bond; L1 and L2 are hydrogen or fluorine; X1 is fluorine or —CF3; and m is 1, and n is 0.





Organic compound and organic light-emitting device

A novel organic compound suitable for emitting green light and an organic light-emitting device including the organic compound are provided. The organic compound is represented by general formula (1). In general formula (1), R1 to R18 are each independently selected from a hydrogen atom, a halogen atom, a substituted or unsubstituted alkyl group, a substituted or unsubstituted alkoxy group, a substituted amino group, a substituted or unsubstituted aryl group, and a substituted or unsubstituted heterocyclic group.





Methods of preparing para-xylene from biomass

Methods of preparing para-xylene from biomass by carrying out a Diels-Alder cycloaddition at controlled temperatures and activity ratios. Methods of preparing bio-terephthalic acid and bio-poly(ethylene terephthalate) (bio-PET) are also disclosed, as well as products formed from bio-PET.





Process for producing 2,3,3,3-tetrafluoropropene

The instant invention relates to a process and method for manufacturing 2,3,3,3-tetrafluoropropene by dehydrohalogenating a reactant stream of 2-chloro-1,1,1,2-tetrafluoropropane that is substantially free from impurities, particularly halogenated propanes, propenes, and propynes.





Method for producing pentafluoroethane

The present invention aims, in a method wherein tetrachloroethylene (PCE) is reacted with HF in a gas phase in the presence of a catalyst to obtain pentafluoroethane (HFC-125), to reduce production of undesirable by-products and maintain catalytic activity at a high level over a long period of time while achieving a high conversion ratio of PCE and suppressing deterioration of the catalyst. In a method for producing pentafluoroethane wherein tetrachloroethylene is reacted with HF in a gas phase in the presence of a catalyst to obtain pentafluoroethane, chromium oxyfluoride is disposed in a reactor as the catalyst, and oxygen is fed into the reactor together with the tetrachloroethylene and HF, in an amount of 0.4-1.8% by mole with respect to the tetrachloroethylene.





Method for purifying 2,3,3,3-tetrafluoropropene

The present invention provides a method for purifying HFO-1234yf, comprising the steps of (1) cooling a liquid mixture containing HFO-1234yf and HF to separate the mixture into an upper liquid phase having a high concentration of HF and a lower liquid phase having a high concentration of 2,3,3,3-tetrafluoropropene; and (2) subjecting the lower liquid phase obtained in step (1) to a distillation operation to withdraw a mixture containing HFO-1234yf and HF from a top of a distillation column, thereby obtaining substantially HF-free HFO-1234yf from a bottom of the distillation column. According to the present invention, HF and HFO-1234yf contained in a mixture containing HF and HFO-1234yf can be separated under simple and economically advantageous conditions.





Process for producing 1,2-dichloro-3,3,3-trifluoropropene

Disclosed is a process for producing 1,2-dichloro-3,3,3-trifluoropropene, characterized in that 1-halogeno-3,3,3-trifluoropropene represented by the general formula [1] (in the formula, X represents a fluorine atom, chlorine atom or bromine atom) is reacted with chlorine in a gas phase in the presence of a catalyst. This process makes it possible to produce 1,2-dichloro-3,3,3-trifluoropropene on an industrial scale with good yield by using 1-halogeno-3,3,3-trifluoropropene, which is available at a low price, as the raw material.





Liquid crystal compound having perfluoroalkyl chain, and liquid crystal composition and liquid crystal display device

The invention provides a new liquid crystal compound having a high clearing point, a good compatibility with other compounds, a small viscosity, and a high stability to heat, light and so forth; compound (1) is provided: R1—(CF2)n—R2 (1), wherein, for example, R1 is alkyl having 4 to 10 carbons or —(CH2)2—CH═CH2, R2 is alkyl having 2 to 10 carbons, n is 8, and R1 and R2 are not allowed to be straight-chain alkyl having an identical number of carbons.





Methods for producing 1-chloro-3,3,3-trifluoropropene from 2-chloro-3,3,3-trifluoropropene

The present invention provides processes for the production of HCFO-1233zd, 1-chloro-3,3,3-trifluoropropene, from the starting material, 2-chloro-3,3,3-trifluoropropene (HCFO-1233xf). In a first process, HCFO-1233zd is produced by the isomerization of HCFO-1233xf. In a second process, HCFO-1233zd is produced in a two-step procedure which includes (i) dehydrochlorination of HCFO-1233xf into trifluoropropyne; and (ii) hydrochlorination of the trifluoropropyne into HCFO-1233zd.





Process for separating chlorinated methanes

The present invention relates to a process for separating chlorinated methanes utilizing a dividing wall column. Processes and manufacturing assemblies for generating chlorinated methanes are also provided, as are processes for producing products utilizing the chlorinated methanes produced and/or separated utilizing the present process(es) and/or assemblies.





Process for purifying (hydro) fluoroalkenes

The invention relates to a process for removing one or more undesired (hydro)halocarbon compounds from a (hydro)fluoroalkene, the process comprising contacting a composition comprising the (hydro)fluoroalkene and one or more undesired (hydro)halocarbon compounds with an aluminum-containing absorbent, activated carbon, or a mixture thereof.





Reactor and agitator useful in a process for making 1-chloro-3,3,3-trifluoropropene

Disclosed is a reactor and agitator useful in a high pressure process for making 1-chloro-3,3,3-trifluoropropene (1233zd) from the reaction of 1,1,1,3,3-pentachloropropane (240fa) and HF, wherein the agitator includes one or more of the following design improvements: (a) double mechanical seals with an inert barrier fluid or a single seal;(b) ceramics on the rotating faces of the seal;(c) ceramics on the static faces of seal;(d) wetted o-rings constructed of spring-energized Teflon and PTFE wedge or dynamic o-ring designs; and(e) wetted metal surfaces of the agitator constructed of a corrosion resistant alloy.