
Concurrency & Multithreading in iOS

Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this:

Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this:

Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce this behavior in our iOS applications.


A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequently, chip manufacturers started adding additional processor cores to each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

How can we take advantage of these extra cores? Multithreading.

Multithreading is a capability provided by the host operating system that allows the creation and use of any number of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case: single-core CPUs are perfectly capable of working on many threads. We'll see in a bit why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram:

In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.


The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

  • Responsibly create new threads, adjusting that number dynamically as system conditions change
  • Manage them carefully, deallocating them from memory once they have finished executing
  • Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
  • Mitigate the risks of coding an application that assumes most of the costs of creating and maintaining the threads it uses, rather than leaving that to the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.


Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.

A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.

Let's take a look at the main components of GCD:

What've we got here? Let's start from the left:

  • DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are performed on this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Consequently, the main queue has the highest priority, and any tasks pushed onto this queue will be executed as soon as possible.
  • DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should resort to using default most of the time. Because tasks on these queues are executed concurrently, there is no guarantee that the order in which tasks were queued will be preserved.
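To make the hand-off between the two queue families concrete, here's a minimal sketch using only the Dispatch framework, so it can run outside an app. Heavy work is submitted to a global concurrent queue; a semaphore stands in for the main-queue callback an app would use to update its UI.

```swift
import Dispatch

let done = DispatchSemaphore(value: 0)
var result = 0

// Heavy work goes to a global concurrent queue, keeping the main queue free.
DispatchQueue.global(qos: .userInitiated).async {
    result = (1...1_000).reduce(0, +)
    // In a real app you would hop back with DispatchQueue.main.async { ... }
    // to update the UI; here we just signal that the work is finished.
    done.signal()
}

done.wait()
print(result) // 500500
```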

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?
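The FIFO behavior of a serial queue is easy to demonstrate with a private serial queue. This sketch uses plain Dispatch so it runs outside an app; the queue label is arbitrary.

```swift
import Dispatch

let serial = DispatchQueue(label: "com.app.serialQueue") // serial by default
let done = DispatchSemaphore(value: 0)
var order: [Int] = []

for i in 1...5 {
    serial.async {
        order.append(i) // safe: only this serial queue ever touches `order`
        if i == 5 { done.signal() }
    }
}

done.wait()
print(order) // [1, 2, 3, 4, 5]: submission order is preserved
```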

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. Background threads can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating, and so on. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified otherwise, code triggered by UI events executes on the Main Queue, so in order to force our work onto a different thread, we'll wrap the compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue, trusting that they are guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance.

Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue with a qos attribute of .background, iOS will treat it as a low-priority task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
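The available quality-of-service classes, from most to least urgent, can be listed directly. This sketch just shows how a qos value selects a global queue; the class is a scheduling hint, not an ordering guarantee.

```swift
import Dispatch

let classes: [DispatchQoS.QoSClass] = [
    .userInteractive, // UI-critical work
    .userInitiated,   // work the user is actively waiting on
    .default,
    .utility,         // long-running, progress-bar style work
    .background       // prefetching, maintenance; fewest resources
]

for qos in classes {
    DispatchQueue.global(qos: qos).async {
        // Under CPU contention, lower classes are given fewer resources.
    }
}
```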

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application."

The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.


Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you several concurrent dispatch queues depending on the priority parameter you pass in.

Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images = [UIImage](repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.
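If you also need to know when all five images are done (to hide a spinner, say), a DispatchGroup pairs naturally with the concurrent queue. This sketch is runnable outside an app; the queue labels are arbitrary, and a serial "lock" queue protects the shared counter.

```swift
import Dispatch

let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
let lock = DispatchQueue(label: "com.app.lock") // serializes counter updates
let group = DispatchGroup()
var processed = 0

for _ in 0..<5 {
    queue.async(group: group) {
        // Pretend to post-process one image.
        lock.sync { processed += 1 }
    }
}

// In an app you'd prefer group.notify(queue: .main) { ... } to avoid blocking.
group.wait()
print(processed) // 5
```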

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number of simultaneous downloads to three? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlock it after it's done to let other threads execute that section. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database, preventing any reads during that time? This is a common thread-safety concern addressed by a readers-writer lock. Semaphores can be used to control concurrency in our app by limiting the number of threads, at most n, that can access a resource at once.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Block if kMaxConcurrent downloads are already in flight
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to a maximum of k concurrent downloads. The moment one download finishes (i.e., a thread is done executing), the semaphore is signaled, allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom OperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.
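For comparison, here's a sketch of the OperationQueue variant just mentioned: the queue itself enforces the limit, so no semaphore is needed. The 0.01-second sleep is a stand-in for a real download.

```swift
import Foundation

let downloadQueue = OperationQueue()
downloadQueue.maxConcurrentOperationCount = 3 // at most three downloads in flight

for songId in 1...15 {
    downloadQueue.addOperation {
        // Pretend to download song `songId`.
        _ = songId
        Thread.sleep(forTimeInterval: 0.01)
    }
}

// Block until all fifteen "downloads" finish (for demonstration only;
// in an app you would not block the current thread like this).
downloadQueue.waitUntilAllOperationsAreFinished()
print(downloadQueue.operationCount) // 0
```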


Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

  • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
  • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
  • Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.
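The cancellation point deserves a tiny sketch. Cancelling an Operation only sets a flag; a long-running block is expected to check isCancelled cooperatively and bail out.

```swift
import Foundation

let queue = OperationQueue()
let op = BlockOperation()

op.addExecutionBlock { [unowned op] in
    for _ in 0..<1_000 {
        // Well-behaved operations check the flag and exit early.
        if op.isCancelled { return }
        // ...process one small chunk of work here...
    }
}

queue.addOperation(op)
op.cancel() // something you can't do to a closure already handed to GCD
queue.waitUntilAllOperationsAreFinished()
print(op.isCancelled) // true
```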

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    let queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let downloadOperation = BlockOperation {
            let image = Downloader.downloadImageWithURL(url: self.imageUrl)
            OperationQueue.main.addOperation {
                self.rawImage = image
            }
        }

        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.


The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness of an application. It is up to you to use queues in a manner that is effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.
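When you do have thousands of small, uniform work items, one way to avoid the per-task overhead is DispatchQueue.concurrentPerform, which submits a single batched job and lets GCD pick an appropriate degree of parallelism for the current machine. A minimal sketch:

```swift
import Dispatch

var results = [Int](repeating: 0, count: 10_000)

// One batched submission instead of 10,000 individually allocated blocks.
// Each index is written by exactly one iteration, so this is data-race free.
results.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: 10_000) { i in
        buffer[i] = i * 2
    }
}

print(results[9_999]) // 19998
```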

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

  • Deadlock: A situation where two or more threads are blocked waiting for each other, which can halt the application's run loop entirely. In the context of GCD, you should be very careful when making sync calls on a queue, as you could easily get into situations where two synchronous operations are stuck waiting for each other.
  • Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility.
  • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
  • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
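As a closing sketch, here's the barrier approach to the producer-consumer problem mentioned above: reads run concurrently against a concurrent queue, while a .barrier write waits for in-flight reads and then runs exclusively. The queue label and the array "store" are illustrative.

```swift
import Dispatch

let dataQueue = DispatchQueue(label: "com.app.dataQueue", attributes: .concurrent)
var store: [Int] = []

// Reads may run concurrently with one another.
func read(_ index: Int) -> Int? {
    return dataQueue.sync {
        store.indices.contains(index) ? store[index] : nil
    }
}

// A barrier block waits for in-flight reads, runs alone, then lets reads resume.
func write(_ value: Int) {
    dataQueue.async(flags: .barrier) {
        store.append(value)
    }
}

write(42)
print(read(0) ?? -1) // 42
```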

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.





The entropy of holomorphic correspondences: exact computations and rational semigroups. (arXiv:2004.13691v1 [math.DS] CROSS LISTED)

We study two notions of topological entropy of correspondences introduced by Friedland and Dinh-Sibony. Upper bounds are known for both. We identify a class of holomorphic correspondences whose entropy in the sense of Dinh-Sibony equals the known upper bound. This provides an exact computation of the entropy for rational semigroups. We also explore a connection between these two notions of entropy.





Complete reducibility: Variations on a theme of Serre. (arXiv:2004.14604v2 [math.GR] UPDATED)

In this note, we unify and extend various concepts in the area of $G$-complete reducibility, where $G$ is a reductive algebraic group. By results of Serre and Bate--Martin--Röhrle, the usual notion of $G$-complete reducibility can be re-framed as a property of an action of a group on the spherical building of the identity component of $G$. We show that other variations of this notion, such as relative complete reducibility and $\sigma$-complete reducibility, can also be viewed as special cases of this building-theoretic definition, and hence a number of results from these areas are special cases of more general properties.





Finite dimensional simple modules of $(q, \mathbf{Q})$-current algebras. (arXiv:2004.11069v2 [math.RT] UPDATED)

The $(q, \mathbf{Q})$-current algebra associated with the general linear Lie algebra was introduced by the second author in the study of representation theory of cyclotomic $q$-Schur algebras. In this paper, we study the $(q, \mathbf{Q})$-current algebra $U_q(\mathfrak{sl}_n^{\langle \mathbf{Q} \rangle}[x])$ associated with the special linear Lie algebra $\mathfrak{sl}_n$. In particular, we classify finite dimensional simple $U_q(\mathfrak{sl}_n^{\langle \mathbf{Q} \rangle}[x])$-modules.





Exotic Springer fibers for orbits corresponding to one-row bipartitions. (arXiv:1810.03731v2 [math.RT] UPDATED)

We study the geometry and topology of exotic Springer fibers for orbits corresponding to one-row bipartitions from an explicit, combinatorial point of view. This includes a detailed analysis of the structure of the irreducible components and their intersections as well as the construction of an explicit affine paving. Moreover, we compute the ring structure of cohomology by constructing a CW-complex homotopy equivalent to the exotic Springer fiber. This homotopy equivalent space admits an action of the type C Weyl group inducing Kato's original exotic Springer representation on cohomology. Our results are described in terms of the diagrammatics of the one-boundary Temperley-Lieb algebra (also known as the blob algebra). This provides a first step in generalizing the geometric versions of Khovanov's arc algebra to the exotic setting.





A regularity criterion of the 3D MHD equations involving one velocity and one current density component in Lorentz spaces. (arXiv:2005.03377v1 [math.AP])

In this paper, we study the regularity criterion of weak solutions to the three-dimensional (3D) MHD equations. It is proved that the solution $(u,b)$ becomes regular provided that one velocity and one current density component of the solution satisfy
\begin{equation}
u_{3}\in L^{\frac{30\alpha}{7\alpha-45}}\left(0,T;L^{\alpha,\infty}\left(\mathbb{R}^{3}\right)\right) \text{ with } \frac{45}{7}\leq \alpha \leq \infty, \label{eq01}
\end{equation}
and
\begin{equation}
j_{3}\in L^{\frac{2\beta}{2\beta-3}}\left(0,T;L^{\beta,\infty}\left(\mathbb{R}^{3}\right)\right) \text{ with } \frac{3}{2}\leq \beta \leq \infty, \label{eq02}
\end{equation}
which generalize some known results.





Irreducible representations of Braid Group $B_n$ of dimension $n+1$. (arXiv:2005.03105v1 [math.GR])

We prove that there are no irreducible representations of $B_n$ of dimension $n+1$ for $n \geq 10$.





Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment. (arXiv:2005.00165v3 [cs.CL] UPDATED)

A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e. is a grammatical sentence more probable than an ungrammatical sentence). Our work uses ambiguous relative clause attachment to extend such evaluations to cases of multiple simultaneous valid interpretations, where stark grammaticality differences are absent. We compare model performance in English and Spanish to show that non-linguistic biases in RNN LMs advantageously overlap with syntactic structure in English but not Spanish. Thus, English models may appear to acquire human-like syntactic preferences, while models trained on Spanish fail to acquire comparable human-like preferences. We conclude by relating these results to broader concerns about the relationship between comprehension (i.e. typical language model use cases) and production (which generates the training data for language models), suggesting that necessary linguistic biases are not present in the training signal at all.





Quantum correlation alignment for unsupervised domain adaptation. (arXiv:2005.03355v1 [quant-ph])

Correlation alignment (CORAL), a representative domain adaptation (DA) algorithm, decorrelates and aligns a labelled source domain dataset to an unlabelled target domain dataset to minimize the domain shift so that a classifier can be applied to predict the target domain labels. In this paper, we implement CORAL on quantum devices by two different methods. One method utilizes quantum basic linear algebra subroutines (QBLAS) to implement CORAL with exponential speedup in the number and dimension of the given data samples. The other method is achieved through a variational hybrid quantum-classical procedure. In addition, numerical experiments of CORAL with three different types of data sets, namely synthetic data, synthetic-Iris data, and handwritten digit data, are presented to evaluate the performance of our work. The simulation results show that the variational quantum correlation alignment algorithm (VQCORAL) can achieve competitive performance compared with the classical CORAL.





Enabling Cross-chain Transactions: A Decentralized Cryptocurrency Exchange Protocol. (arXiv:2005.03199v1 [cs.CR])

Inspired by Bitcoin, many different kinds of cryptocurrencies based on blockchain technology have turned up on the market. Due to the special structure of the blockchain, it has been deemed impossible to directly trade between traditional currencies and cryptocurrencies or between different types of cryptocurrencies. Generally, trading between different currencies is conducted through a centralized third-party platform. However, it has the problem of a single point of failure, which is vulnerable to attacks and thus affects the security of the transactions. In this paper, we propose a distributed cryptocurrency trading scheme to solve the problem of centralized exchanges, which can achieve trading between different types of cryptocurrencies. Our scheme is implemented with smart contracts on the Ethereum blockchain and deployed on the Ethereum test network. We not only implement transactions between individual users, but also allow transactions between multiple users. The experimental result proves that the cost of our scheme is acceptable.





Football High: Garrett Harper's Story, Part II

The decisions coaches make on the sidelines about returning a concussed player to the game or not can be a "game changer" for that athlete's life.





Football High: Garrett Harper's Story, Part I

For many competitive high school football players like Garrett Harper, the intensity of this contact sport has its price.





Correlating data from multiple business processes to a business process scenario

The present disclosure involves systems, software, and computer-implemented methods for providing process intelligence by correlating events from multiple business process systems to a single business scenario using configurable correlation strategies. An example method includes identifying a raw event associated with a sending business process and a receiving business process, identifying a sending business process attribute associated with the sending business process and a receiving business process attribute associated with the receiving business process, determining a correlation strategy for associating the raw event with a business scenario instance, the determination based at least in part on the sending business process attribute and the receiving business process attribute, and generating a visibility scenario event from the raw event according to the correlation strategy, the visibility scenario event associated with the business scenario instance.





Co-current catalyst flow with feed for fractionated feed recombined and sent to high temperature reforming reactors

A process is presented for increasing the yields of aromatics from reforming a hydrocarbon feedstream. The process includes splitting a naphtha feedstream into a light hydrocarbon stream and a heavier stream having a relatively rich concentration of naphthenes. The heavy stream is reformed to convert the naphthenes to aromatics, and the resulting product stream is further reformed with the light hydrocarbon stream to increase the aromatics yields. The catalyst is passed through the reactors in a sequential manner.




rre

Method for quenching paraffin dehydrogenation reaction in counter-current reactor

A process is presented for quenching a process stream in a paraffin dehydrogenation process. The process comprises cooling a propane dehydrogenation stream during the hot residence time after the process stream leaves the catalytic bed reactor section. The process includes cooling and compressing the product stream, taking a portion of the product stream, and passing that portion to mix with the process stream as it leaves the catalytic bed reactor section.




rre

Optical element for correcting color blindness

Described herein are devices, compositions, and methods for improving color discernment.




rre

Techniques for reusing components of a logical operations functional block as an error correction code correction unit

A logical operations functional block for an execution unit of a processor includes a first input data link for a first operand and a second input data link for a second operand. The execution unit includes a register connected to an error correction code detection unit. The logical operations functional block includes a look-up table configured to receive an error correction code syndrome from the error correction code detection unit. The logical operations functional block also includes a multiplexer configured to receive an output signal from the look-up table at a first input and the first operand at a second input, wherein an output of the multiplexer is coupled to the first input data link of a logical functional unit.




rre

Error detection and correction apparatus and method

Embodiments of apparatus and methods for error detection and correction are described. A codeword may have a data portion and associated check bits. In embodiments, one or more error detection modules may be configured to detect a plurality of error types in the codeword. One or more error correction modules coupled with the one or more error detection modules may be further configured to correct errors of the plurality of error types once they are detected by the one or more error detection modules. Other embodiments may be described and/or claimed.




rre

Packet transmission/reception apparatus and method using forward error correction scheme

A packet transmission/reception apparatus and method are provided. The packet transmission method of the present invention includes acquiring a source payload including partial source symbols from a source block, generating a source packet including the source payload and an identifier (ID) of the source payload, generating a repair packet including a repair payload corresponding to the source payload and an ID of the repair payload, generating a Forward Error Correction (FEC) packet block including the source and repair packets, and transmitting the FEC packet block. The source payload ID includes a source payload sequence number incrementing by 1 per source packet. The packet transmission/reception method of the present invention is advantageous in improving error correction capability and network resource utilization efficiency.
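The sequencing and block structure described above can be sketched in Python. The XOR-parity repair symbol below is a simplified stand-in for an actual FEC repair scheme, and all names (`SourcePacket`, `build_fec_block`, and so on) are illustrative assumptions, not identifiers from the disclosure:

```python
from dataclasses import dataclass
from functools import reduce

@dataclass
class SourcePacket:
    seq: int          # source payload ID: increments by 1 per source packet
    payload: bytes

@dataclass
class RepairPacket:
    repair_id: int
    payload: bytes

def build_fec_block(source_block: bytes, symbol_size: int):
    """Split a source block into sequenced source packets and append one
    XOR-parity repair packet (a stand-in for a real FEC repair symbol)."""
    payloads = [source_block[i:i + symbol_size]
                for i in range(0, len(source_block), symbol_size)]
    payloads[-1] = payloads[-1].ljust(symbol_size, b"\x00")  # equal-length symbols
    sources = [SourcePacket(seq=i, payload=p) for i, p in enumerate(payloads)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), payloads)
    return sources + [RepairPacket(repair_id=0, payload=parity)]
```

With XOR parity, any single lost source packet can be recovered by XOR-ing the repair payload with the surviving source payloads.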




rre

Method and apparatus for error-correction in and processing of GFP-T superblocks

The present invention discloses a method and apparatus for processing and error correction of a GFP-T superblock, where the 64 bytes of payload data of a first superblock are buffered in the first page of a two-page buffer. The flag byte is buffered in a separate buffer, and a CRC operation is performed in a separate logic element. The result of the CRC operation is checked against a single syndrome table which may indicate single- or multi-bit errors. As the payload data of the first superblock is processed and read out of the first page of the two-page buffer, the payload data of a second superblock is written into the second page of the two-page buffer to be processed and corrected.




rre

Systems, methods and devices for multi-tiered error correction

An error control encoding system produces a codeword from a data word, where the resulting codeword includes the data word and three or more parity segments produced using the data word. The system includes a first encoder to encode the data word in two or more first data segments in order to produce two or more first parity segments, where each of the two or more first data segments includes a respective sequential portion of the data word. The system includes a second encoder to encode the data word in one or more second data segments in order to produce a corresponding one or more second parity segments, where each of the one or more second data segments includes a respective sequential portion of the data word, and each of the one or more second data segments also includes a sequential portion of the data included in a plurality of the two or more first data segments. Further, the system includes a controller configured to provide the two or more first data segments of the data word to the first encoder for encoding and to provide the one or more second data segments of the data word to the second encoder for encoding.




rre

High speed and low power circuit structure for barrel shifter

A barrel shifter uses a sign magnitude to 2's complement converter to generate decoder signals for its cascaded multiplexer selectors. The sign input receives the shift direction and the magnitude input receives the shift amount. The sign magnitude to 2's complement converter computes an output result as a 2's complement of the shift amount using the shift direction as a sign input, assigns a first portion (most significant bit half) of the output result to a first decoder signal, and assigns a second portion (least significant bit half) of the output result to a second decoder signal. This encoding scheme allows the decoder circuits to be relatively simple, for example, 3-to-8 decoders for an implementation adapted to shift a 64-bit operand value rather than the 4-to-9 decoder required in a conventional barrel shifter, leading to faster operation, less area, and reduced power consumption.
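The sign-magnitude-to-2's-complement encoding can be modeled in software. The sketch below uses a 64-bit rotate to stand in for the shifter datapath and assumed function names; it shows how a single 6-bit rotate count folds both shift directions together and splits into the two 3-bit fields that would drive the 3-to-8 decoders:

```python
def sign_mag_to_twos_complement(direction: int, amount: int, bits: int = 6):
    """Fold (direction, amount) into one 6-bit right-rotate count: a left
    rotate by n equals a right rotate by the 2's complement of n (mod 64)."""
    count = (-amount) % (1 << bits) if direction else amount % (1 << bits)
    half = bits // 2
    msb_half = count >> half              # would drive one 3-to-8 decoder
    lsb_half = count & ((1 << half) - 1)  # would drive the other
    return count, msb_half, lsb_half

def rotate_64(value: int, direction: int, amount: int) -> int:
    """Barrel-rotate a 64-bit value; direction 0 = right, 1 = left."""
    count, _, _ = sign_mag_to_twos_complement(direction, amount)
    mask = (1 << 64) - 1
    return ((value >> count) | (value << (64 - count))) & mask
```

Because both directions collapse into one count, the hardware needs only one rotate network rather than separate left and right paths.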




rre

Apparatuses enabling concurrent communication between an interface die and a plurality of dice stacks, interleaved conductive paths in stacked devices, and methods for forming and operating the same

Various embodiments include apparatuses, stacked devices and methods of forming dice stacks on an interface die. In one such apparatus, a dice stack includes at least a first die and a second die, and conductive paths coupling the first die and the second die to a common interface die. In some embodiments, the conductive paths may be arranged to connect with circuitry on alternating dice of the stack. In other embodiments, a plurality of dice stacks may be arranged on a single interface die, and some or all of the dice may have interleaving conductive paths.




rre

Method of reducing downward flow of air currents on the lee side of exterior structures

A method of reducing the downward flow of air currents on the leeward side of an emissions-emitting structure, including the step of using a system that includes components chosen from the group consisting of one or more mechanical air moving devices; physical structures; and combinations thereof to create an increase in the air pressure within a volume of air on the leeward side of an emissions-emitting structure having emissions that become airborne. The increased air pressure prevents or lessens the downward flow of emissions that would occur without the use of the system, and increases both the safety with which one can travel a road or other transportation route that might otherwise be visually obscured by the emissions and the safety of property and persons within the area where the emissions occur.




rre

Indirect designation of physical configuration number as logical configuration number based on correlation information, within parallel computing

A computing section is provided with a plurality of computing units and correlatively stores entries of configuration information that describes configurations of the plurality of computing units with physical configuration numbers that represent the entries of configuration information and executes a computation in a configuration corresponding to a designated physical configuration number. A status management section designates a physical configuration number corresponding to a status to which the computing section needs to advance the next time for the computing section and outputs the status to which the computing section needs to advance the next time as a logical status number that uniquely identifies the status to which the computing section needs to advance the next time in an object code. A determination section determines whether or not the computing section has stored an entry of configuration information corresponding to the status to which the computing section needs to advance the next time based on the logical status number that is output from the status management section. A rewriting section correlatively stores the entry of the configuration information and a physical configuration number corresponding to the entry of the configuration information in the computing section when the determination section determines that the computing section has not stored the entry of configuration information corresponding to the status to which the computing section needs to advance the next time.
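The determination and rewriting sections behave much like a small software cache mapping logical status numbers to physical configuration slots: on a hit the stored physical number is used directly, and on a miss the configuration entry is written and the correlation recorded. The Python sketch below, including the class name and its round-robin slot policy, is an assumption for illustration, not the disclosed hardware:

```python
class ConfigurationCache:
    """Maps logical status numbers to physical configuration slots,
    loading (rewriting) an entry into the computing section on a miss."""

    def __init__(self, num_slots: int):
        self.num_slots = num_slots
        self.logical_to_physical = {}   # logical status number -> physical slot
        self.next_slot = 0              # round-robin replacement pointer

    def resolve(self, logical: int, config_store: dict) -> int:
        # Determination section: is this configuration already stored?
        if logical in self.logical_to_physical:
            return self.logical_to_physical[logical]
        # Rewriting section: pick a slot, evict its old mapping, store the entry
        slot = self.next_slot % self.num_slots
        self.next_slot += 1
        self.logical_to_physical = {
            l: p for l, p in self.logical_to_physical.items() if p != slot
        }
        self.logical_to_physical[logical] = slot
        config_store[slot] = logical    # stand-in for writing configuration data
        return slot
```

The object code only ever names logical status numbers, so configurations can be relocated among physical slots without recompilation.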




rre

Methods and apparatus for storing expanded width instructions in a VLIW memory for deferred execution

Techniques are described for decoupling fetching of an instruction stored in a main program memory from earliest execution of the instruction. An indirect execution method and program instructions to support such execution are addressed. In addition, an improved indirect deferred execution processor (DXP) VLIW architecture is described which supports a scalable array of memory centric processor elements that do not require local load and store units.




rre

System, method and computer program product for recursively executing a process control operation to use an ordered list of tags to initiate corresponding functional operations

In accordance with embodiments, there are provided mechanisms and methods for controlling a process using a process map. These mechanisms and methods for controlling a process using a process map can enable process operations to execute in order without necessarily having knowledge of one another. The ability to provide the process map can avoid a requirement that the operations themselves be programmed to follow a particular sequence, as can further improve the ease by which the sequence of operations may be changed.
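In software terms, a process map of this kind reduces to an ordered list of tags dispatched against a registry of operations that know nothing of one another; reordering the list reorders the process without touching the operations. A hypothetical Python sketch (all names and the example operations are assumptions):

```python
def run_process_map(process_map, registry, context):
    """Execute operations in the order given by the process map's tags;
    each operation receives and returns the shared context."""
    for tag in process_map:
        context = registry[tag](context)
    return context

# Illustrative registry: each operation is independent of the others.
registry = {
    "validate": lambda ctx: {**ctx, "valid": True},
    "enrich":   lambda ctx: {**ctx, "score": ctx["value"] * 2},
    "finalize": lambda ctx: {**ctx, "done": True},
}
```

Changing the sequence is then a data edit (a new list of tags) rather than a code change.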




rre

Methods and systems to identify and reproduce concurrency violations in multi-threaded programs using expressions

Methods and systems to identify and reproduce concurrency bugs in multi-threaded programs are disclosed. An example method disclosed herein includes defining a data type. The data type includes a first predicate associated with a first thread of a multi-threaded program that is associated with a first condition, a second predicate that is associated with a second thread of the multi-threaded program, the second predicate being associated with a second condition, and an expression that defines a relationship between the first predicate and the second predicate. The relationship, when satisfied, causes the concurrency bug to be detected. A concurrency bug detector conforming to the data type is used to detect the concurrency bug in the multi-threaded program.
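The disclosed data type maps naturally onto a small structure holding the two per-thread predicates and the expression relating them. The Python sketch below is illustrative only; the use-after-free detector is an assumed example, not one taken from the disclosure:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ConcurrencyBugDetector:
    """Two predicates observe per-thread conditions; the expression over
    both, when satisfied, signals the concurrency bug."""
    first_predicate: Callable[[dict], bool]    # condition on thread 1's state
    second_predicate: Callable[[dict], bool]   # condition on thread 2's state
    expression: Callable[[bool, bool], bool]   # relationship between the two

    def check(self, t1_state: dict, t2_state: dict) -> bool:
        return self.expression(self.first_predicate(t1_state),
                               self.second_predicate(t2_state))

# Assumed example: flag the bug when thread 1 has freed a resource
# while thread 2 is dereferencing it.
use_after_free = ConcurrencyBugDetector(
    first_predicate=lambda s: s["freed"],
    second_predicate=lambda s: s["dereferencing"],
    expression=lambda a, b: a and b,
)
```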




rre

Prediction of dynamic current waveform and spectrum in a semiconductor device

A method for accurately determining the shape of currents in a current spectrum for a circuit design is provided. The method includes determining timing and power consumption characteristics. In one embodiment, timing characteristics are provided through an electronic design automation (EDA) tool. The timing characteristics yield a current pulse time width. In another embodiment, power consumption characteristics are provided by an EDA tool. The power consumption characteristics yield a current pulse amplitude. The shape of the current pulse is obtained by incrementally running a power analyzer tool over relatively small time increments over one or more clock cycles while capturing the switching nodes of a simulation of the circuit design for each time increment. In one embodiment, the time increments are one nanosecond or less.




rre

Fast-cycling, conduction-cooled, quasi-isothermal, superconducting fault current limiter

Fault Current Limiters (FCL) provide protection for upstream and/or downstream devices in electric power grids. Conventional FCL require the use of expensive conductors and liquid or gas cryogen handling. Disclosed embodiments describe FCL systems and devices that use lower-cost superconductors, require no liquid cryogen, and are fast cycling. These improved FCL can sustain many sequential faults and require less time to clear faults while avoiding the use of liquid cryogen. Disclosed embodiments describe an FCL with a superconductor and cladding cooled to cryogenic temperatures; these are connected in parallel with a second resistor across two nodes in a circuit. According to disclosed embodiments, the resistance of the superconducting components and their sheath in the fault mode is sufficiently high to minimize energy deposition within the cryogenic system, minimizing recovery time. A scheme for intermediate heat storage is also described; it allows a useful compromise between the energy minimization enabled by conductor length and the allowable number of sequential faults, enabling an overall system design that is affordable yet conduction-cooled (cryogen-free), with fast recovery and tolerance for multiple sequential faults.




rre

Superconducting direct-current electrical cable

A superconductive electrical direct current cable with at least two conductors insulated relative to each other is indicated, where the conductors are arranged in a cryostat suitable for guidance of the cooling agent, wherein the cryostat is composed of at least one metal pipe which is surrounded by a circumferentially closed layer with thermally insulating properties. In the cryostat is arranged a strand-shaped carrier composed of insulating material, where the carrier has at least two diametrically oppositely located outwardly open grooves, in each of which is arranged one of the conductors. Each conductor is composed of a plurality of superconductive elements.




rre

Inductive fault current limiter with divided secondary coil configuration

An inductive fault current limiter (1), has a normally conducting primary coil assembly (2) with a multiplicity of turns (3), and a superconducting, short-circuited secondary coil assembly (4). The primary coil assembly (2) and the secondary coil assembly (4) are disposed at least substantially coaxially with respect to each other and at least partially interleaved in each other. The secondary coil assembly (4) has a first coil section (4a) disposed radially inside the turns (3) of the primary coil assembly (2) and a second coil section (4b) disposed radially outside the turns (3) of the primary coil assembly (2). The fault current limiter has an increased inductance ratio.




rre

Audio controlling apparatus, audio correction apparatus, and audio correction method

According to one embodiment, an audio controlling apparatus includes a first receiver configured to receive an audio signal, a second receiver configured to receive environmental sound, a temporary gain calculator configured to calculate a temporary gain based on the environmental sound received by the second receiver, a sound type determination module configured to determine the sound type of the main component of the audio signal received by the first receiver, and a gain controller configured to stabilize the temporary gain calculated by the temporary gain calculator and set the gain when it is determined that the sound type of the main component of the audio signal is music.




rre

Device, method, and graphical user interface for managing concurrently open software applications

A method includes displaying a first application view. A first input is detected, and an application view selection mode is entered for selecting one of concurrently open applications for display in a corresponding application view. An initial group of open application icons in a first predefined area and at least a portion of the first application view adjacent to the first predefined area are concurrently displayed. The initial group of open application icons corresponds to at least some of the concurrently open applications. A gesture is detected on a respective open application icon in the first predefined area, and a respective application view for a corresponding application is displayed without concurrently displaying an application view for any other application in the concurrently open applications. The open application icons in the first predefined area cease to be displayed, and the application view selection mode is exited.




rre

Automatic detection and correction of magnetic resonance imaging data

Systems and methods for processing magnetic resonance imaging (MRI) data are provided. A method includes receiving MRI data comprising a plurality of k-space points and deriving a plurality of image data sets based on the MRI data, each of the plurality of image data sets obtained by zeroing a different one of the plurality of k-space points. The method further includes computing image space metric values for each of the plurality of image data sets and adjusting a portion of the MRI data associated with ones of the image space metric values that fail to meet a threshold value to yield adjusted MRI data.




rre

Heart rate correction system and methods for the detection of cardiac events

A device for detecting a cardiac event is disclosed. Detection of an event is based on a test applied to a parameter whose value varies according to heart rate. Both the parameter value and heart rate (RR interval) are filtered with an exponential average filter. From these filtered values, the average change in the parameter and the RR interval are also computed with an exponential average filter. Before computing the average change in the parameter, large changes in the parameter over short times, which may be caused by body position shifts, are attenuated or removed, so that the average change represents an average of small/smooth changes in the parameter's value that are characteristic of acute ischemia, one of the cardiac events that may be detected. The test to detect the cardiac event depends on the heart rate, the difference between the parameter's value and its upper and lower normal values, and its average change over time, adjusted for heart rate changes. The upper and lower normal parameter values as a function of heart rate are determined from long term stored data of the filtered RR values and parameter values. Hysteresis related data and transitory deviations from normal (e.g. vasospasm related data) are excluded from the computation of normal upper and lower parameter bounds.
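The exponential average filter at the core of the method is a one-line recurrence, and the attenuation of large short-term changes before averaging can be modeled as a simple clamp on the step size. Both functions below are illustrative sketches; the names and the default `alpha` and `limit` values are assumptions, not values from the disclosure:

```python
def exponential_average(prev: float, sample: float, alpha: float = 0.125) -> float:
    """One step of the exponential (EWMA) filter applied to both the
    parameter value and the RR interval: new = prev + alpha*(sample - prev)."""
    return prev + alpha * (sample - prev)

def attenuate_jump(prev_avg: float, sample: float, limit: float) -> float:
    """Clamp a large step change (e.g. from a body-position shift) so only
    small/smooth changes feed the average-change computation."""
    delta = sample - prev_avg
    return prev_avg + max(-limit, min(limit, delta))
```

Feeding each attenuated sample back through `exponential_average` yields the smoothed trend from which the average change is computed.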




rre

Exposure correction for scanners

A method and apparatus for exposure correction in scanners are disclosed. In the method, exposure is corrected for pixels received in an image sensor array. Exposure time is tracked for the incoming pixel data and a calibration factor is determined for correcting the gain, and the calibration factor is adjusted based on the tracked exposure time. In the apparatus, a scanner includes an illumination source and a sensor for receiving pixel data. Using values stored in a memory, circuitry is provided for determining a calibration factor, for tracking exposure time for the pixel data and for adjusting the pixel data based on the calibration factor and exposure time.




rre

Meter electronics and fluid quantification method for a fluid being transferred

Meter electronics (20) for quantifying a fluid being transferred is provided. The meter electronics (20) includes an interface (201) configured to communicate with a flowmeter assembly of a vibratory flowmeter and receive a vibrational response and a processing system (203) coupled to the interface (201). The processing system (203) is configured to measure a volume flow and a density for a predetermined time portion of the fluid transfer, determine if the fluid transfer is non-aerated during the predetermined time portion, if the predetermined time portion is non-aerated then add a volume-density product to an accumulated volume-density product and add the volume flow to an accumulated volume flow, and determine a non-aerated volume-weighted density for the fluid transfer by dividing the accumulated volume-density product by the accumulated volume flow.
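The accumulation logic described above reduces to summing a volume-density product and a volume flow over the non-aerated portions of the transfer, then dividing the two accumulators. A minimal Python sketch (the function name and the `(volume, density, aerated)` tuple layout are assumptions):

```python
def non_aerated_weighted_density(portions):
    """Accumulate the volume-density product and the volume flow over
    portions flagged non-aerated, then return the volume-weighted density."""
    acc_vd = acc_v = 0.0
    for volume, density, aerated in portions:
        if not aerated:               # aerated portions are excluded entirely
            acc_vd += volume * density
            acc_v += volume
    return acc_vd / acc_v if acc_v else None
```

Excluding aerated portions keeps entrained gas from biasing the density reported for the transferred fluid.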




rre

Irreversible color changing ink compositions

The disclosure is generally related to an irreversible thermochromic ink composition and, more particularly, to an irreversible thermochromic ink composition comprising a carrier and thermochromic capsules, the thermochromic capsules comprising a shell and a core, the core comprising an eradicable dye capable of becoming substantially colorless and/or of changing color from a first colored state to a second colored state when exposed to an eradicator.




rre

Irreversible thermochromic ink compositions

An irreversible thermochromic ink composition can include thermochromic pigment capsules dispersed in a carrier. The irreversible thermochromic pigment capsules can include an inner core having a color changing dye, a color activator for activating the color changing dye, and a wax, an outer core surrounding the inner core and comprising a color destroying agent, and a shell surrounding the outer core. Alternatively, the irreversible thermochromic pigment capsules can include an inner core having a color destroying agent and a wax, an outer core surrounding the inner core and comprising a color changing dye and a color activator for activating the color changing dye, and a shell surrounding the outer core. Written marks made with the irreversible thermochromic inks can be rendered a different color or substantially colorless by application of a sufficient amount of heat to melt or substantially liquefy the wax in the irreversible thermochromic pigment capsules.




rre

Counter-current multistage Fischer-Tropsch reactor systems

The invention discloses an improved multistage Fischer-Tropsch process scheme for the production of hydrocarbon fuels comprising feeding gaseous-phase syngas and liquid-stream hydrocarbons in a counter-current manner such as herein described into the reaction vessel at a number of stages containing reaction catalysts; wherein fresh syngas enters at the stage where the product liquid stream leaves and the fresh liquid stream enters at the stage where the unreacted syngas leaves; wherein further the temperature of each stage can be controlled independently. More particularly the invention relates to improving the heat release in different reactors, product selectivity and reactor productivity of Fischer-Tropsch (FT) reactors.




rre

Chiropractic posture correction tool

The claimed invention provides an improved posture correction tool in the form of a table to be used by chiropractic practitioners to treat mechanical disorders of the spine and musculoskeletal system. The improved posture correction tool provides a plurality of pads to support the various major areas of the body and has built in drop capability and adjustment capability for the pelvic pad, the lumbar pad, the thoracic pad and the head and cervical area. The claimed invention also has a novel cervical support.




rre

Fluid mixer using countercurrent injection

A method and apparatus for mixing fluids, such as beverage syrup and water, uses countercurrent injection to improve blending of the fluids. A mixing chamber has a first inlet through which a first fluid is fed to the mixing chamber, and a second inlet through which a countercurrent injection nozzle extends and is operative to inject a second fluid into a stream of the first fluid. The countercurrent injection nozzle is equipped with a check valve to control the flow of fluid into the mixing chamber. The mixing chamber may include additional inlets that may be fitted with countercurrent injection nozzles to permit the countercurrent injection of other fluid, such as flavorings, into the stream of the first fluid.




rre

Eddy current minimizing flow plug for use in flow conditioning and flow metering

An eddy-current-minimizing flow plug has an outer radial wall with open flow channels formed between the plug's inlet and outlet. The plug has a central region coupled to the inner surface of the outer radial wall. Each open flow channel includes (i) a first portion originating at the inlet and converging to a location in the plug where convergence is contributed to by changes in thickness of the outer radial wall and divergence of the central region, and (ii) a second portion originating in the plug and diverging to the outlet where divergence is contributed to by changes in thickness of the outer radial wall and convergence of the central region. For at least a portion of the open flow channels, a central axis passing through the first and second portions is non-parallel with respect to the given direction of the flow.




rre

Tapered barrel twin shaft preconditioner

An improved dual-shaft preconditioner (10) of simplified design is provided giving increased moisturization and partial cooking of food or feed ingredients. The preconditioner (10) includes an elongated, tapered housing (12) presenting a pair of side-by-side, communicating housing sections (58, 60), with a corresponding pair of converging shafts (20, 22) within the sections (58, 60) and having a series of elongated, outwardly extending mixing elements (24, 26) secured to the shafts (20, 22). The preconditioner (10) is designed for use in a system including a downstream processing device, such as an extruder (146).




rre

High sensitivity eddy current monitoring system

A method of chemical mechanical polishing a substrate includes polishing a metal layer on the substrate at a polishing station, monitoring thickness of the metal layer during polishing at the polishing station with an eddy current monitoring system, and controlling pressures applied by a carrier head to the substrate during polishing of the metal layer at the polishing station based on thickness measurements of the metal layer from the eddy current monitoring system to reduce differences between an expected thickness profile of the metal layer and a target profile, wherein the metal layer has a resistivity greater than 700 ohm Angstroms.




rre

Soybean plant and seed corresponding to transgenic event MON87769 and methods for detection thereof

The present invention provides transgenic soybean event MON87769, and cells, seeds, and plants comprising DNA diagnostic for the soybean event. The invention also provides compositions comprising nucleotide sequences that are diagnostic for said soybean event in a sample, methods for detecting the presence of said soybean event nucleotide sequences in a sample, probes and primers for use in detecting nucleotide sequences that are diagnostic for the presence of said soybean event in a sample, growing the seeds of such soybean event into soybean plants, and breeding to produce soybean plants comprising DNA diagnostic for the soybean event.




rre

Vertical turret lathe

The invention provides a vertical turret lathe capable of preventing an inner diameter turning tool attached to a turret tool rest from interfering with a workpiece during machining of the outer diameter of the workpiece. The vertical turret lathe comprises a work table that holds a workpiece W1 and rotates, and a working head 40 having a turret tool rest 50 and capable of moving in X-axis and Z-axis directions. A tool holder 70 for holding an inner diameter turning tool T2 via hydraulic pressure is attached to a part of the tool supporting portions 60 of the turret tool rest. The automatic tool changer apparatus 100 includes a turret-type tool magazine 120 and pistons 160 and 162, which mechanically press the pins 74 and 76 of the tool holder 70 and clamp or unclamp the tool T2.




rre

Feature value estimation device and corresponding method, and spectral image processing device and corresponding method

An estimation device is configured to estimate a feature value of a specific component contained in a sample and includes: a spectral estimation parameter storage module; a calibration parameter storage module; a multiband image acquirer; an optical spectrum operator configured to compute an optical spectrum from a multiband image using a spectral estimation parameter; and a calibration processor configured to compute the feature value from the optical spectrum using a calibration parameter.
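If both the spectral estimation and the calibration are taken to be linear (an assumption for illustration; the disclosure does not fix the model form), the pipeline is two dot products: a spectral-estimation matrix maps the multiband pixel to an optical spectrum, and calibration weights plus a bias map that spectrum to the feature value. A hypothetical sketch:

```python
def estimate_feature(multiband, spectral_matrix, calib_weights, calib_bias):
    """Estimate an optical spectrum from a multiband pixel with a linear
    spectral-estimation matrix, then map the spectrum to a feature value
    with a linear calibration (weights and bias). Names are illustrative."""
    # Spectral estimation: spectrum = spectral_matrix @ multiband
    spectrum = [sum(w * b for w, b in zip(row, multiband))
                for row in spectral_matrix]
    # Calibration: feature = calib_weights . spectrum + calib_bias
    return sum(c * s for c, s in zip(calib_weights, spectrum)) + calib_bias
```

Storing the two parameter sets separately, as the abstract describes, allows the calibration to be retrained without recomputing the spectral estimation matrix.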




rre

Optical system for occupancy sensing, and corresponding method

An optical system for occupancy sensing according to the invention includes a plurality of optical line sensors, each consisting of a linear array of light sensing elements; and an optical light integrating device that integrates light from rays with incidence angles subject to geometric constraints to be sensed by a light sensing element.