read Best Photos of 2019 by JMG-Galleries Blog Readers By feedproxy.google.com Published On :: Thu, 16 Jan 2020 08:13:58 +0000 I’m excited to share the results of my 13th annual Best of Photos project. 112 photographers from around the world (amateur and professional alike) have shared their best photos of 2019. I’m always amazed at the quality of work shared and I hope it’s a source of inspiration to you for the coming year. For those who are new to my blog project, photographers taking part span the gamut of photo enthusiasts to professionals. The great thing about photography is that no matter what your skill level, we can all relate equally in our love for the art of photography and visual exploration. With that in mind, I encourage you to reach out to photographers whose work you enjoy to keep sharing & growing as an artist. I am incredibly thankful that this tradition has been embraced and enjoy seeing how familiar faces have evolved their work & grown over the years. I hope reviewing your best photos of the year and comparing them to years past keeps you inspired and aware of your progress as a photographer. If you’d like to take part next year and be informed when submissions open for the “Best Photos of 2020” blog project, add your name to my mailing list. You won’t be spammed. I send out newsletters quite infrequently. Thanks to everyone who took part! I invite you to visit each link below as I have and introduce yourself to many of the participating photographers. Best Photos of 2019 Best Photos of 2019 – JMG-Galleries – Jim M. Goldstein My Top 10 Photographs Of 2019 – Michael Russell My Ten Favorite Photos of 2019 – ADVENTR – Randy Langstraat Favorite Photos of 2019 – T.M. Schultze My Favorite Photos from 2019 – Alexander S. Kunz Best of 2019 – Dave Wilson Harold Davis—Best of 2019 – Harold Davis 2019: A Photographic Retrospective – Johann A. 
Briffa Top 10 Favorites of 2019 – Stefan Baeurle My Best Natural History Photos of 2019 – Phil Colla My Best Photos From 2019 – Daniel Brinneman Best Photos of 2019 – Peter Tellone Rétrospective des meilleures photos de l’année 2019 – Francis Gagnon Best of 2019 by Rachel Cohen – Rachel Cohen Photo Highlights 2019 – Alan Majchrowicz My Top 10 Nature Photos of 2019 – Greg Vaughn 2019 – The Year in Pictures | Russ Bishop Photography – Russ Bishop 2019 Favorites – A Split Year – Joseph Smith My 12 Favorite Photos of 2019 – Chuq Von Rospach Wild Drake Photography – Drake Dyck Matt Payne Photography – Matt Payne My Favorite Images Of 2019 – Werner Priller Favourites from 2019 – Bryn Tassell My Favorite Photos of 2019 – A Year-End Retrospective – Gary Crabbe / Enlightened Images 2019 Favorite Photographs – Pat Ulrich Without reflection we go blindly on our way – Bjorn Kleemann 2019 – Ten moments – Ramen Saha top photos :: 2019 – Denise Goldberg Changing Perspectives – Best of 2019 – Jenni Brehm Island in the Net – Khürt Williams Best Photos of 2019: My Favorites of the Year – Todd Henson My Ten Best of Images of 2019 – Mike Chowla 2019 Favorite Photos – Alan Dahl Tech Photo Guy – Best Photos 2019 – Aaron Hockley 2019 Favorites – Martin Quinn Best of TheDarkSlides 2019 – TheDarkSlides 2019 Jim Goldstein Project – J.J. 
RAIA My Favorites of 2019 – Rich Greene My Favorite Images of 2019 (aka ‘Best of 2019’) – Pete Miller 2019 Year in Review, Decade in Review – Robin Black Photography Under Pressure Photography – Scott McGee My favorite Slovenia photos of 2019 – Luka Esenko 5 Moments in Time – 2019 – Gavin Crook My favorite photos of the decade – Matt Payne My Ten Favourite Images of 2019 – Jens Preshaw 2019 in Pictures – Milan Hutera Twelve from 2019 – Tom Whelan My Favorite Photos of 2019 – Jeff Hubbard 2019 Favorites – Rick Holliday Best of the Best 2019 – Richard Valenti Best Landscape and Nature Photos of 2019 – Clint Losee Best of 2019 – My Favorite Images of the Year – Rob Tilley 2019 Year in Review – Greg Russell | Alpenglow Images Best of The Decade Including 2019 – Adrian Klein Best of 2019 – Brian Knott Natural History Photography – Highlights from 2019 – Gabor Ruff Best of 2019 – Jeff Dupuie Top 2019 – Eric Chan Best of 2019 – Greg Clure Twenty Nineteen: In retrospect – Charlotte Gibb Favorite Blog Photos of 2019 – Jim Coda My Favorites 2019 – Beth Young Living Wilderness: Best of 2019 – Kevin Ebi 2019 Favorites – Mike Cleron Best of 2019 – Romain Guy 2019 Favorite Images – Sam Folsom Michael Katz Photography – Michael Katz Twenty Nineteen – Mark Graf 2019 in Review – and Happy New Year” Photography & Travel – brent huntley Top 10 Favorite Images from 2019 – Derrald Farnsworth-Livingston My Photo Highlights of 2019 – Caleb Weston Lagemaat Photography – Best images of 2019 – Jao van de Lagemaat Favorites from 2019 – Kyle Jones A Baker’s Dozen – Mike Christoferson 10 Favorites of 2019: An Amazing Year – Kurt Lawson Top 20 Photographs of 2019 – Year-End-Retrospective – Landscape Photography Reader/David Leland Hyde Favorite Photos of 2019 – Deb Snelson Favorites – 2019 – Daniel Leu Best of 2019 – Steve Cozad Fog from Above in 2019 – Andrew Thomas Favorites of 2019 – Mick McMurray Some Favorites from 2019 – Josh Meier Top 10 Images of 2019 – Stephen L. 
Kapp Top Ten of 2019 – Holly Davison Best 2019 – Barbara Michalowska The Creative Photographer – Andrew S. Gibson My Favorite Photos of 2019 – Patricia Davidson A Thousand Words – Lucy Autrey Wilson 2019 Top Twelve Photographs – David J Grenier Urban Dinosaurs – Steven M. Bellovin Best of 2019 – Thomas Yackley Carol’s Little World – Best of 2019 – Carol Schiraldi My favourite shots of 2019 – Catalin Marin Top 2019 Photos – Matt Conti Top Ten 2019 – Phyllis Whitman Hunter Favorites from […] Full Article Photography Updates & Announcements Best of 2019 Best of Photos Blog Project
read Concurrency & Multithreading in iOS By feedproxy.google.com Published On :: Tue, 25 Feb 2020 08:00:00 -0500 Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this: Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this: Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce such behavior into our iOS applications. A Brief History In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequently, chip manufacturers started adding additional processor cores on each chip in order to increase total performance. 
By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem... How can we take advantage of these extra cores? Multithreading. Multithreading is a capability provided by the host operating system that allows the creation and use of n threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads; we'll take a look in a bit at why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram: In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program. The Burden of Threads A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. 
However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage. Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

- Responsibly create new threads, adjusting that number dynamically as system conditions change
- Manage them carefully, deallocating them from memory once they have finished executing
- Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
- Mitigate the risks of writing an application that assumes most of the costs of creating and maintaining its threads itself, rather than leaving them to the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance. Grand Central Dispatch iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete. 
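That asynchronous-function shape can be sketched in a few lines. This is a hedged illustration, not an Apple API: `parseDocument` and its word-count body are hypothetical stand-ins for any long-running task, and the semaphore at the end exists only so the snippet can observe the result before exiting.

```swift
import Foundation

// A sketch of an asynchronous function: it kicks the real work onto a
// background thread and returns immediately; the result arrives later
// through a completion handler.
func parseDocument(_ text: String, completion: @escaping (Int) -> Void) {
    Thread.detachNewThread {
        let wordCount = text.split(separator: " ").count // pretend this is slow
        completion(wordCount)
    }
}

let done = DispatchSemaphore(value: 0)
var result = 0

parseDocument("a quick brown fox") { count in
    result = count
    done.signal()
}
// parseDocument has already returned here; we block only to observe the result.
done.wait()
print(result) // 4
```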
A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads. Let's take a look at the main components of GCD: What've we got here? Let's start from the left: DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Accordingly, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately. DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should resort to using default most of the time. Because tasks on these queues are executed concurrently, there is no guarantee that the order in which tasks were queued will be preserved. Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading. Serial Queues: The Main Thread As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. 
Let's pretend it is post-processing an image stored on the device.

    import UIKit

    class ViewController: UIViewController {
        @IBAction func handleTap(_ sender: Any) {
            compute()
        }

        private func compute() {
            // Pretending to post-process a large image.
            var counter = 0
            for _ in 0..<9999999 {
                counter += 1
            }
        }
    }

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest. We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long? Background Threads How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating, etc. Let's make a small change to our button click handler above:

    class ViewController: UIViewController {
        @IBAction func handleTap(_ sender: Any) {
            DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
                self.compute()
            }
        }

        private func compute() {
            // Pretending to post-process a large image.
            var counter = 0
            for _ in 0..<9999999 {
                counter += 1
            }
        }
    }

Unless specified otherwise, a snippet of code will usually default to executing on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that it is guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance. Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing. You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will treat it as a utility task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority. A Note on Main Thread vs. Main Queue You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". 
If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application." The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably. Concurrent Queues So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in. Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

    class ViewController: UIViewController {
        let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
        let images: [UIImage] = [UIImage](repeating: UIImage(), count: 5)

        @IBAction func handleTap(_ sender: Any) {
            for img in images {
                queue.async { [unowned self] in
                    self.compute(img)
                }
            }
        }

        private func compute(_ img: UIImage) {
            // Pretending to post-process a large image.
            var counter = 0
            for _ in 0..<9999999 {
                counter += 1
            }
        }
    }

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop. Parallelization of N Tasks So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? 
How can Spotify download multiple songs in parallel, while limiting the maximum number to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores. Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlocks it after it's done to let other threads execute the said section of the code. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database while preventing any reads during that time? This is a common concern in thread safety called a readers-writer lock. Semaphores can be used to control concurrency in our app by limiting access to at most n threads at a time.

    let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
    let semaphore = DispatchSemaphore(value: kMaxConcurrent)
    let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

    class ViewController: UIViewController {
        @IBOutlet weak var tableView: UITableView!

        @IBAction func handleTap(_ sender: Any) {
            for i in 0..<15 {
                downloadQueue.async { [unowned self] in
                    // Lock shared resource access
                    semaphore.wait()
                    // Expensive task
                    self.download(i + 1)
                    // Update the UI on the main thread, always!
                    DispatchQueue.main.async {
                        self.tableView.reloadData()
                        // Release the lock
                        semaphore.signal()
                    }
                }
            }
        }

        func download(_ songId: Int) {
            var counter = 0
            // Simulate semi-random download times.
            for _ in 0..<Int.random(in: 999999...10000000) {
                counter += songId
            }
        }
    }

Notice how we've effectively restricted our download system to at most k downloads at a time. The moment one download finishes (and the UI is updated), the call to signal() increments the semaphore's counter, allowing the managing queue to spawn another thread and start downloading another song. 
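The readers-writer scenario mentioned above maps naturally onto GCD's barrier flag: reads are submitted normally to a concurrent queue and run in parallel, while a write submitted with .barrier waits for in-flight blocks and runs alone. A minimal sketch, with a hypothetical SynchronizedDictionary type standing in for "the database":

```swift
import Dispatch

// Reads run concurrently; writes use a .barrier block, which waits for
// in-flight reads to finish and excludes all other work while it runs.
final class SynchronizedDictionary {
    private var storage: [String: Int] = [:]
    private let queue = DispatchQueue(label: "com.app.rwQueue", attributes: .concurrent)

    func value(forKey key: String) -> Int? {
        // Synchronous read: many readers can be inside the queue at once.
        return queue.sync { storage[key] }
    }

    func set(_ value: Int, forKey key: String) {
        // Barrier write: the "one writer, no readers" behavior described above.
        queue.async(flags: .barrier) {
            self.storage[key] = value
        }
    }
}

let store = SynchronizedDictionary()
DispatchQueue.concurrentPerform(iterations: 50) { i in
    store.set(i, forKey: "song\(i)")
}
print(store.value(forKey: "song7") ?? -1) // 7
```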
You can apply a similar pattern to database transactions when dealing with concurrent reads and writes. Semaphores usually aren't necessary for code like that in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless. Finer Control with OperationQueue GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this: This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API: You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application. The Operation and OperationQueue classes have a number of properties that can be observed using KVO (Key-Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue. Operations can be paused, resumed, and cancelled. 
Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle. OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects. The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

    class ViewController: UIViewController {
        let queue = OperationQueue()
        var rawImage: UIImage? = nil
        let imageUrl = URL(string: "https://example.com/portrait.jpg")!
        @IBOutlet weak var imageView: UIImageView!

        override func viewDidLoad() {
            super.viewDidLoad()

            let downloadOperation = BlockOperation {
                // Assign the result directly; bouncing the assignment to the
                // main queue here would let filterOperation start before
                // rawImage is actually set.
                self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
            }

            let filterOperation = BlockOperation {
                let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
                OperationQueue.main.addOperation {
                    self.imageView.image = filteredImage
                }
            }

            filterOperation.addDependency(downloadOperation)

            [downloadOperation, filterOperation].forEach {
                queue.addOperation($0)
            }
        }
    }

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. 
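The concurrency cap mentioned above is the OperationQueue counterpart to the semaphore throttle we built earlier, and it is a one-liner. A small sketch under assumed, arbitrary counts and timings; cancelAllOperations() is noted in a comment as the cancellation hook:

```swift
import Foundation

// An OperationQueue that never runs more than 3 operations at once,
// the same throttling we previously built by hand with a semaphore.
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 3

let lock = NSLock()
var finished = 0

for _ in 1...10 {
    queue.addOperation {
        Thread.sleep(forTimeInterval: 0.01) // simulated download
        lock.lock()
        finished += 1
        lock.unlock()
    }
}

// queue.cancelAllOperations() would drop any operations not yet started.
queue.waitUntilAllOperationsAreFinished()
print(finished) // 10
```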
Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds. The Cost of Concurrency DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it. We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like: Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using DispatchQueue.sync { } calls, as you could easily get yourself into situations where two synchronous operations get stuck waiting for each other. Priority Inversion: A condition where a lower-priority task blocks a higher-priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is a real possibility. Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. 
This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD. ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency. Parting Thoughts + Further Reading If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper. Building Concurrent User Interfaces on iOS (WWDC 2012) Concurrency and Parallelism: Understanding I/O Apple's Official Concurrency Programming Guide Mutexes and Closure Capture in Swift Locks, Thread Safety, and Swift Advanced NSOperations (WWDC 2015) NSHipster: NSOperation Full Article Code
read Windows 8 HTML5 WinRT RSS reader app By feedproxy.google.com Published On :: Fri, 24 Aug 2012 02:33:48 +0000 WinJS is a JavaScript framework for Windows 8, and David Rousset uses it here to create a quick RSS reader. He shows how in a tutorial series. The first article shows how to build a welcome screen that employs the WinJS ListView control. Blend and CSS3 are employed. The second tutorial shows work on the Read the rest... Full Article Front Page HTML Microsoft
read Understanding Climate Change Means Reading Beyond Headlines By feedproxy.google.com Published On :: Sat, 11 Feb 2017 21:18:19 +0000 By David Suzuki The David Suzuki Foundation Seeing terms like “post-truth” and “alternative facts” gain traction in the news convinces me that politicians, media workers and readers could benefit from a refresher course in how science helps us understand the … Continue reading → Full Article Climate & Climate Change Points of View & Opinions Climate Change climate research extreme weather events
read Concurrency & Multithreading in iOS By feedproxy.google.com Published On :: Tue, 25 Feb 2020 08:00:00 -0500 Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this: Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this: Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can enforce such behavior into our iOS applications. A Brief History In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequentially, chip manufacturers started adding additional processor cores on each chip in order to increase total performance. 
By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem... How can we take advantage of these extra cores? Multithreading. Multithreading is an implementation handled by the host operating system to allow the creation and usage of n amount of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads, but we'll take a look in a bit as to why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram: In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program. The Burden of Threads A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. 
However, most of these programs are either system daemons or background processes with very low memory footprints, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

- Responsibly create new threads, adjusting that number dynamically as system conditions change
- Manage them carefully, deallocating them from memory once they have finished executing
- Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
- Bear the risks of an application that assumes most of the costs of creating and maintaining the threads it uses, rather than leaving that to the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.

Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.
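Before looking at what iOS provides, the manual thread management described above can be made concrete with a small sketch. This is illustrative only (the `Worker` class and its busy-loop "work" are invented for the example): every thread must be created, synchronized, and joined by hand.

```swift
import Foundation

// A minimal sketch of the manual approach: the developer creates each
// thread, guards shared state with a lock, and polls for completion.
final class Worker {
    private let lock = NSLock()
    private(set) var total = 0
    private var finishedThreads = 0

    func run(threads: Int, incrementsPerThread: Int) {
        for _ in 0..<threads {
            let t = Thread {
                for _ in 0..<incrementsPerThread {
                    // Shared state must be guarded manually with a mutex.
                    self.lock.lock()
                    self.total += 1
                    self.lock.unlock()
                }
                self.lock.lock()
                self.finishedThreads += 1
                self.lock.unlock()
            }
            t.start() // starting (and tracking) every thread is on us
        }
        // Joining is also manual: poll until every thread reports done.
        while true {
            lock.lock()
            let done = finishedThreads == threads
            lock.unlock()
            if done { break }
            Thread.sleep(forTimeInterval: 0.001)
        }
    }
}
```

Even this toy version needs a lock, a completion counter, and a polling join; none of that logic has anything to do with the actual work being performed.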
A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.

Let's take a look at the main components of GCD. What've we got here? Let's start from the left:

- DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Subsequently, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
- DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should resort to using the default priority most of the time. Because tasks on these queues are executed concurrently, there is no guarantee they will complete in the order in which they were queued.

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything.
Let's pretend it is post-processing an image stored on the device.

```swift
import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}
```

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating, etc. Let's make a small change to our button click handler above:

```swift
class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}
```

Unless specified, a snippet of code will usually default to execute on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that it is guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch.

As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance. Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will treat it as a utility task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue".
If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application." The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.

Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in. Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

```swift
class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}
```

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions?
How can Spotify download multiple songs in parallel, while limiting the maximum number to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlocks after it's done to let other threads execute the said section of the code. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database while preventing any reads during that time? This is a common concern in thread safety called a readers-writer lock. Semaphores can be used to control concurrency in our app by allowing us to limit execution to at most n threads at a time.

```swift
let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()
                // Expensive task
                self.download(i + 1)
                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()
                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0
        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}
```

Notice how we've effectively restricted our download system to at most k concurrent downloads. The moment one download finishes (or its thread is done executing), it signals the semaphore, incrementing its counter and allowing the managing queue to spawn another thread and start downloading another song.
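The same cap on concurrency can also be expressed declaratively with an OperationQueue and maxConcurrentOperationCount. Below is a runnable sketch under stated assumptions: `simulateDownload` is an invented stand-in for real network work, and the lock-guarded `peak` counter exists only to demonstrate that the limit holds.

```swift
import Foundation

// Cap parallel "downloads" at 3 using maxConcurrentOperationCount
// instead of a semaphore.
let opQueue = OperationQueue()
opQueue.maxConcurrentOperationCount = 3 // at most 3 operations run at once

let stateLock = NSLock()
var current = 0 // operations executing right now
var peak = 0    // highest observed concurrency

func simulateDownload(_ songId: Int) {
    stateLock.lock(); current += 1; peak = max(peak, current); stateLock.unlock()
    Thread.sleep(forTimeInterval: 0.05) // pretend to download
    stateLock.lock(); current -= 1; stateLock.unlock()
}

for i in 1...15 {
    opQueue.addOperation { simulateDownload(i) }
}
opQueue.waitUntilAllOperationsAreFinished()
```

The queue schedules all 15 operations, but never lets more than 3 execute simultaneously; the semaphore bookkeeping from the previous example disappears entirely.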
You can apply a similar pattern to database transactions when dealing with concurrent reads and writes. Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The code above would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.

Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended, and tracked, while still working with a closure-friendly API? Imagine an operation like this: This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

- You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
- The Operation and OperationQueue classes have a number of properties that can be observed using KVO (Key-Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
- Operations can be paused, resumed, and cancelled.
Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle. OperationQueue also allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

```swift
class ViewController: UIViewController {
    let queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let downloadOperation = BlockOperation {
            let image = Downloader.downloadImageWithURL(url: self.imageUrl)
            OperationQueue.main.addOperation {
                self.rawImage = image
            }
        }

        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        // The filter must not start until the download has finished.
        filterOperation.addDependency(downloadOperation)
        [downloadOperation, filterOperation].forEach { queue.addOperation($0) }
    }
}
```

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation.
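To make the life-cycle control concrete, here is a minimal sketch of the cancellation behavior GCD lacks. The `ExportOperation` class and its chunked "work" are invented for illustration; the key point is that a well-behaved Operation subclass polls isCancelled cooperatively, and a cancelled operation's main() is never run by the queue.

```swift
import Foundation

// A cooperative Operation subclass: it checks isCancelled as it works,
// so callers can abandon it before or during execution.
final class ExportOperation: Operation {
    private(set) var processedChunks = 0

    override func main() {
        for _ in 0..<1_000 {
            // Bail out early if someone cancelled us mid-flight.
            if isCancelled { return }
            processedChunks += 1
        }
    }
}

let exportQueue = OperationQueue()
let export = ExportOperation()
export.cancel() // cancelled before it ever runs...
exportQueue.addOperation(export)
exportQueue.waitUntilAllOperationsAreFinished()
// ...so the queue skips main() and no chunks are processed.
```

A closure handed to DispatchQueue.async offers no equivalent hook: once submitted, it will run to completion no matter what.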
Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.

The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness of an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks, like:

- Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using synchronous DispatchQueue.sync { } calls, as you could easily get yourself into situations where two synchronous operations get stuck waiting for each other.
- Priority Inversion: A condition where a lower-priority task blocks a higher-priority task from executing, which effectively inverts their priorities. Since GCD allows different levels of priority on its background queues, priority inversion is a real possibility.
- Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it.
This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD. ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.

- Building Concurrent User Interfaces on iOS (WWDC 2012)
- Concurrency and Parallelism: Understanding I/O
- Apple's Official Concurrency Programming Guide
- Mutexes and Closure Capture in Swift
- Locks, Thread Safety, and Swift
- Advanced NSOperations (WWDC 2015)
- NSHipster: NSOperation

Full Article Code
read Readability Algorithms Should Be Tools, Not Targets By feedproxy.google.com Published On :: Fri, 01 May 2020 11:30:00 +0000 The web is awash with words. They’re everywhere. On websites, in emails, advertisements, tweets, pop-ups, you name it. More people are publishing more copy than at any point in history. That means a lot of information, and a lot of competition. In recent years a slew of ‘readability’ programs have appeared to help us tidy up the things we write. (Grammarly, Readable, and Yoast are just a handful that come to mind. Full Article
read A reaction-diffusion system to better comprehend the unlockdown: Application of SEIR-type model with diffusion to the spatial spread of COVID-19 in France. (arXiv:2005.03499v1 [q-bio.PE]) By arxiv.org Published On :: A reaction-diffusion model was developed describing the spread of the COVID-19 virus considering the mean daily movement of susceptible, exposed and asymptomatic individuals. The model was calibrated using data on confirmed infections and deaths from France as well as their initial spatial distribution. First, the system of partial differential equations is studied, then the basic reproduction number, R0, is derived. Second, numerical simulations, based on a combination of level-set and finite differences, showed the spatial spread of COVID-19 from March 16 to June 16. Finally, scenarios of unlockdown are compared according to variations in distancing, or partial spatial lockdown. Full Article
read Unsupervised Domain Adaptation on Reading Comprehension. (arXiv:1911.06137v4 [cs.CL] UPDATED) By arxiv.org Published On :: Reading comprehension (RC) has been studied in a variety of datasets with the boosted performance brought by deep neural networks. However, the generalization capability of these models across different domains remains unclear. To alleviate this issue, we are going to investigate unsupervised domain adaptation on RC, wherein a model is trained on labeled source domain and to be applied to the target domain with only unlabeled samples. We first show that even with the powerful BERT contextual representation, the performance is still unsatisfactory when the model trained on one dataset is directly applied to another target dataset. To solve this, we provide a novel conditional adversarial self-training method (CASe). Specifically, our approach leverages a BERT model fine-tuned on the source dataset along with the confidence filtering to generate reliable pseudo-labeled samples in the target domain for self-training. On the other hand, it further reduces domain distribution discrepancy through conditional adversarial learning across domains. Extensive experiments show our approach achieves comparable accuracy to supervised models on multiple large-scale benchmark datasets. Full Article
read Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines. (arXiv:2005.03106v1 [cs.CV]) By arxiv.org Published On :: Smart meters enable remote and automatic electricity, water and gas consumption reading and are being widely deployed in developed countries. Nonetheless, there is still a huge number of non-smart meters in operation. Image-based Automatic Meter Reading (AMR) focuses on dealing with this type of meter reading. We estimate that the Energy Company of Paraná (Copel), in Brazil, performs more than 850,000 readings of dial meters per month. Those meters are the focus of this work. Our main contributions are: (i) a public real-world dial meter dataset (shared upon request) called UFPR-ADMR; (ii) a deep learning-based recognition baseline on the proposed dataset; and (iii) a detailed error analysis of the main issues present in AMR for dial meters. To the best of our knowledge, this is the first work to introduce deep learning approaches to multi-dial meter reading, and perform experiments on unconstrained images. We achieved a 100.0% F1-score on the dial detection stage with both Faster R-CNN and YOLO, while the recognition rates reached 93.6% for dials and 75.25% for meters using Faster R-CNN (ResNext-101). Full Article
read 5 Best Practices for Breadcrumb Navigation By feedproxy.google.com Published On :: Sat, 21 Sep 2019 13:00:01 +0000 Breadcrumbs are a subtle element of a website that helps improve usability and navigation. They’re a utility that often receives little acknowledgment; however, breadcrumbs can have a large impact and provide a plethora of benefits, such as lowering bounce rate, increasing conversions, and improving user satisfaction. Imagine you’re in a regular grocery store, except […] The post 5 Best Practices for Breadcrumb Navigation appeared first on WebFX Blog. Full Article Web Design
read Regional summer camps hope the pandemic doesn't put activities on pause, but have backup plans ready if it does By www.inlander.com Published On :: Thu, 30 Apr 2020 04:00:00 -0700 After having their school year totally disrupted by the coronavirus pandemic, a return to some semblance of normalcy come summer is all many school-age kids and their families are looking forward to. For many, this anticipation includes annual summer camp traditions, from sleep-away adventures on the lake to fun-filled day camps for arts, learning or team sports.… Full Article Summer Camps
read Key Missteps at the CDC Have Set Back Its Ability to Detect the Potential Spread of Coronavirus By www.inlander.com Published On :: Fri, 28 Feb 2020 06:25:49 -0800 The CDC designed a flawed test for COVID-19, then took weeks to figure out a fix so state and local labs could use it. New York still doesn’t trust the test’s accuracy By Caroline Chen, Marshall Allen, Lexi Churchill and Isaac Arnsdorf Propublica… Full Article News/Nation & World
read These are our neighbors. These are readers. These are the people we're all trying to save. By www.inlander.com Published On :: Thu, 26 Mar 2020 01:30:00 -0700 How the coronavirus outbreak has upended people's lives across the Inland Northwest The numbers don't lie.… Full Article News/Local News
read As The Rise of Skywalker readies to put a bow on a chapter in Star Wars lore, the franchise's omnipresence has shifted its fandom By www.inlander.com Published On :: Thu, 19 Dec 2019 01:30:00 -0800 With all due respect to Greta Thunberg and Billie Eilish, nobody had a better 2019 than Baby Yoda. The real star of the Disney+ flagship Star Wars series The Mandalorian, the little green puppeteering/CGI marvel (aka "the Child") might be the most adorable creature ever created.… Full Article Film/Film News
read New reads from Emily St. John Mandel, vampy vibes in FX's mockumentary, and more you need to know By www.inlander.com Published On :: Thu, 07 May 2020 01:30:00 -0700 The Buzz Bin VAMPY VIBES… Full Article Culture/Arts & Culture
read Resisting the spread of unwanted code and data By www.freepatentsonline.com Published On :: Tue, 19 May 2015 08:00:00 EDT A method of processing an electronic file by identifying portions of content data in the electronic file and determining if each portion of content data is passive content data having a fixed purpose or active content data having an associated function. If a portion is passive content data, then a determination is made as to whether the portion of passive content data is to be re-generated. If a portion is active content data, then the portion is analyzed to determine whether the portion of active content data is to be re-generated. A re-generated electronic file is then created from the portions of content data which are determined to be re-generated. Full Article
read Apparatus and methods for adaptive thread scheduling on asymmetric multiprocessor By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT Techniques for adaptive thread scheduling on a plurality of cores for reducing system energy are described. In one embodiment, a thread scheduler receives leakage current information associated with the plurality of cores. The leakage current information is employed to schedule a thread on one of the plurality of cores to reduce system energy usage. On chip calibration of the sensors is also described. Full Article
read Two-tiered dynamic load balancing using sets of distributed thread pools By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT By employing a two-tier load balancing scheme, embodiments of the present invention may reduce the overhead of shared resource management, while increasing the potential aggregate throughput of a thread pool. As a result, the techniques presented herein may lead to increased performance in many computing environments, such as graphics intensive gaming. Full Article
read Low latency variable transfer network communicating variable written to source processing core variable register allocated to destination thread to destination processing core variable register allocated to source thread By www.freepatentsonline.com Published On :: Tue, 28 Apr 2015 08:00:00 EDT A method and circuit arrangement utilize a low latency variable transfer network between the register files of multiple processing cores in a multi-core processor chip to support fine grained parallelism of virtual threads across multiple hardware threads. The communication of a variable over the variable transfer network may be initiated by a move from a local register in a register file of a source processing core to a variable register that is allocated to a destination hardware thread in a destination processing core, so that the destination hardware thread can then move the variable from the variable register to a local register in the destination processing core. Full Article
read Issue policy control within a multi-threaded in-order superscalar processor By www.freepatentsonline.com Published On :: Tue, 12 May 2015 08:00:00 EDT A multi-threaded in-order superscalar processor 2 includes an issue stage 12 including issue circuitry 22, 24 for selecting instructions to be issued to execution units 14, 16 in dependence upon a currently selected issue policy. A plurality of different issue policies are provided by associated different policy circuitry 28, 30, 32 and a selection between which of these instances of the policy circuitry 28, 30, 32 is active is made by policy selecting circuitry 34 in dependence upon detected dynamic behavior of the processor 2. Full Article
read Efficient conditional ALU instruction in read-port limited register file microprocessor By www.freepatentsonline.com Published On :: Tue, 12 May 2015 08:00:00 EDT A microprocessor having performs an architectural instruction that instructs it to perform an operation on first and second source operands to generate a result and to write the result to a destination register only if its architectural condition flags satisfy a condition specified in the architectural instruction. A hardware instruction translator translates the instruction into first and second microinstructions. To execute the first microinstruction, an execution pipeline performs the operation on the source operands to generate the result. To execute the second microinstruction, it writes the destination register with the result generated by the first microinstruction if the architectural condition flags satisfy the condition, and writes the destination register with the current value of the destination register if the architectural condition flags do not satisfy the condition. Full Article
read Shared load-store unit to monitor network activity and external memory transaction status for thread switching By www.freepatentsonline.com Published On :: Tue, 19 May 2015 08:00:00 EDT An array of a plurality of processing elements (PEs) are in a data packet-switched network interconnecting the PEs and memory to enable any of the PEs to access the memory. The network connects the PEs and their local memories to a common controller. The common controller may include a shared load/store (SLS) unit and an array control unit. A shared read may be addressed to an external device via the common controller. The SLS unit can continue activity as if a normal shared read operation has taken place, except that the transactions that have been sent externally may take more cycles to complete than the local shared reads. Hence, a number of transaction-enabled flags may not have been deactivated even though there is no more bus activity. The SLS unit can use this state to indicate to the array control unit that a thread switch may now take place. Full Article
read Hardware assist thread for increasing code parallelism By www.freepatentsonline.com Published On :: Tue, 19 May 2015 08:00:00 EDT Mechanisms are provided for offloading a workload from a main thread to an assist thread. The mechanisms receive, in a fetch unit of a processor of the data processing system, a branch-to-assist-thread instruction of a main thread. The branch-to-assist-thread instruction informs hardware of the processor to look for an already spawned idle thread to be used as an assist thread. Hardware implemented pervasive thread control logic determines if one or more already spawned idle threads are available for use as an assist thread. The hardware implemented pervasive thread control logic selects an idle thread from the one or more already spawned idle threads if it is determined that one or more already spawned idle threads are available for use as an assist thread, to thereby provide the assist thread. In addition, the hardware implemented pervasive thread control logic offloads a portion of a workload of the main thread to the assist thread. Full Article
read System for generating readable and meaningful descriptions of stream processing source code By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT An information processing system, computer readable storage medium, and method for automatically generating human readable and meaningful documentation for one or more source code files. A processor of the information processing system receives one or more source code files containing source code artifacts (SCA) and infers semantics therefrom based on predefined rules. The processor, based on the inferred semantics, extracts documentation from another source code file. The extracted documentation and the inferred semantics are used to generate new human readable and meaningful documentation for the SCA, such new documentation being previously missing from the SCA. The generated new documentation is included with the SCA in one or more source code files. Full Article
read Adjustment of threads for execution based on over-utilization of a domain in a multi-processor system by destroying parallizable group of threads in sub-domains By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT Embodiments provide various techniques for dynamic adjustment of a number of threads for execution in any domain based on domain utilizations. In a multiprocessor system, the utilization for each domain is monitored. If a utilization of any of these domains changes, then the number of threads for each of the domains determined for execution may also be adjusted to adapt to the change. Full Article
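The monitoring-and-adjustment loop above can be sketched in a few lines of Python. The threshold value and the halving policy are illustrative assumptions; the patent claims the general technique of adapting per-domain thread counts to utilization, not these specific numbers.

```python
def adjust_threads(domains, over_threshold=0.85):
    """Return a new thread count per domain based on measured utilization.

    When a domain is over-utilized, shrink the parallelizable group of
    threads assigned to its sub-domains (here: halve it, keeping >= 1).
    """
    adjusted = {}
    for name, info in domains.items():
        threads, util = info["threads"], info["utilization"]
        if util > over_threshold:
            adjusted[name] = max(1, threads // 2)  # destroy half the group
        else:
            adjusted[name] = threads               # leave the domain alone
    return adjusted

snapshot = {
    "io":  {"threads": 8, "utilization": 0.95},
    "cpu": {"threads": 4, "utilization": 0.40},
}
new_counts = adjust_threads(snapshot)
```

In a real system this function would run periodically against fresh utilization samples, so thread counts track load changes over time.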
read Methods and systems to identify and reproduce concurrency violations in multi-threaded programs using expressions By www.freepatentsonline.com Published On :: Tue, 15 Sep 2015 08:00:00 EDT Methods and systems to identify and reproduce concurrency bugs in multi-threaded programs are disclosed. An example method disclosed herein includes defining a data type. The data type includes a first predicate associated with a first thread of a multi-threaded program that is associated with a first condition, a second predicate that is associated with a second thread of the multi-threaded program, the second predicate being associated with a second condition, and an expression that defines a relationship between the first predicate and the second predicate. The relationship, when satisfied, causes the concurrency bug to be detected. A concurrency bug detector conforming to the data type is used to detect the concurrency bug in the multi-threaded program. Full Article
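The data type in this abstract (two per-thread predicates plus an expression relating them) maps naturally onto a small Python class. All names here are assumptions for illustration; the patent defines the abstract data type, not this API.

```python
class ConcurrencyCheck:
    """Two predicates, one per thread, plus an expression over both.

    When the expression is satisfied, the concurrency bug is detected.
    """

    def __init__(self, pred_thread1, pred_thread2, expression):
        self.pred_thread1 = pred_thread1  # condition observed in thread 1
        self.pred_thread2 = pred_thread2  # condition observed in thread 2
        self.expression = expression      # relationship between predicates

    def violated(self, state1, state2):
        return self.expression(self.pred_thread1(state1),
                               self.pred_thread2(state2))

# Hypothetical bug: thread 1 dereferences a null pointer while
# thread 2 has already freed the shared buffer.
check = ConcurrencyCheck(
    pred_thread1=lambda s: s["ptr"] is None,
    pred_thread2=lambda s: s["freed"],
    expression=lambda p1, p2: p1 and p2,
)
```

A detector conforming to this type would evaluate `violated` against observed thread states to flag and later reproduce the interleaving.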
read Programmable clock spreading By www.freepatentsonline.com Published On :: Tue, 12 May 2015 08:00:00 EDT An integrated circuit having a programmable clock spreader configured to generate a plurality of controllably skewed clock signals, each applied to a corresponding region within the integrated circuit with circuitry configured to be triggered off the applied clock signal. The programmable clock spreader is designed to enable customization of the current-demand characteristics exhibited by the integrated circuit, e.g., based on the circuit's spectral impedance profile, to cause transient voltage droops in the power-supply network of the integrated circuit to be sufficiently small to ensure proper and reliable operation of the integrated circuit. Full Article
read Method, system, and computer readable medium for creating clusters of text in an electronic document By www.freepatentsonline.com Published On :: Tue, 21 Jul 2015 08:00:00 EDT Disclosed herein are systems and methods for navigating electronic texts. According to an aspect, a method may include determining text subgroups within an electronic text. The method may also include selecting a text seed within one of the text subgroups. Further, the method may include determining a similarity relationship between the text seed and one or more adjacent text subgroups that do not include the selected text seed. The method may also include associating the text seed with the one or more adjacent text subgroups based on the similarity relationship to create a text cluster. Full Article
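The seed-and-neighbor clustering steps above can be sketched as a short Python function. The Jaccard word-overlap measure and the 0.5 threshold are illustrative assumptions; the patent claims the method, not a specific similarity metric.

```python
def jaccard(a, b):
    """Word-set overlap between two text subgroups (assumed metric)."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def grow_cluster(subgroups, seed_index, similarity=jaccard, threshold=0.5):
    """Associate the seed subgroup with adjacent subgroups whose
    similarity to the seed meets the threshold, forming a text cluster."""
    seed = subgroups[seed_index]
    cluster = [seed_index]
    # Only the immediate neighbors on each side are candidates,
    # matching the abstract's "adjacent text subgroups".
    for neighbor in (seed_index - 1, seed_index + 1):
        if 0 <= neighbor < len(subgroups):
            if similarity(seed, subgroups[neighbor]) >= threshold:
                cluster.append(neighbor)
    return sorted(cluster)

parts = ["the cat sat", "the cat ran", "dogs bark loud"]
cluster = grow_cluster(parts, seed_index=0)
```

Here the first two subgroups share half their words, so they cluster together, while the unrelated third subgroup stays out.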
read Data processing apparatus, data processing method of data processing apparatus, and computer-readable memory medium storing program therein By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT To establish a highly operable peripheral-selection environment that greatly reduces the effort required to confirm the configuration information of selectable peripherals, and that lets anyone confirm that information with a simple operating instruction, a CPU obtains the configuration information and default settings of the printer currently being selected, based on the selection state of the selectable printer candidates on a network, and displays them as a caption near the position indicated by the cursor on a printer selection screen shown on a CRT. Full Article
read Printing device, mobile terminal, and computer readable recording medium for the same By www.freepatentsonline.com Published On :: Tue, 02 Jun 2015 08:00:00 EDT A printing device includes a printing device side wireless communication unit configured to execute wireless communication with a mobile terminal, an operation acquisition unit configured to acquire user operation thereof, and a processor. The processor is configured to acquire operation data which is generated as the operation acquisition unit acquires a user operation, acquire establishment data which is generated as the printing device side wireless communication unit establishes a wireless communication with the mobile terminal, and issue a request control to control the printing device side wireless communication unit to transmit request data requesting the mobile terminal to transmit print data necessary for printing, via the wireless communication, when the establishment data is acquired, the request control being issued in accordance with the operation data as acquired. Full Article
read Radio paging selective receiver with display for notifying presence of unread message based on time of receipt By www.freepatentsonline.com Published On :: Tue, 19 Jan 1999 08:00:00 EST A radio paging selective receiver determines that a received message is unread when the time difference between the message reception time and the current time is larger than some predetermined value, and the receiver indicates the unread message by displaying its reception time in a second fashion which is visibly different from the first fashion normally used to display the current time. Full Article
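The time-threshold rule in this abstract is simple enough to state directly in Python. The one-hour threshold and function name are illustrative assumptions, not values from the patent.

```python
def is_unread(reception_time, current_time, threshold_seconds=3600):
    """A message counts as unread when the elapsed time since its
    reception exceeds the predetermined threshold."""
    return (current_time - reception_time) > threshold_seconds

# A message received two hours ago (7200 s) is flagged as unread;
# the receiver would then display its reception time in the second,
# visibly different fashion.
print(is_unread(reception_time=0, current_time=7200))
```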
read Methods, systems, apparatuses, and computer-readable media for waking a SLIMbus without toggle signal By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT Arrangements for restarting data transmission on a serial low-power inter-chip media bus (SLIMbus) are presented. A clock signal may be provided in an active mode to a component communicatively coupled with the SLIMbus. Immediately prior to the clock signal in the active mode being provided, the clock signal may have been in a paused mode. While the clock signal was in the paused mode at least until the clock signal is provided in the active mode, the data line may have been inactive (e.g., a toggle on the data line may not have been present). Frame synchronization data for a frame may be transmitted. The frame synchronization data for the frame, as received by the component, may not match expected frame synchronization data. Payload data may be transmitted as part of the frame to the component, wherein the payload data is expected to be read properly by the component. Full Article
read Methods, systems, and computer readable media for monitored application of mechanical force to samples using acoustic energy and mechanical parameter value extraction using mechanical response models By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT Methods, systems, and computer readable media for monitored application of mechanical force to samples using acoustic energy and mechanical parameter value extraction using mechanical response models can be used for determining mechanical property parameters of a sample. An exemplary method includes applying acoustic energy to a sample to apply a mechanical force to the sample, measuring a response by the sample during the application of the acoustic energy, measuring a recovery response of the sample following cessation of the application of the acoustic energy, and determining a value for at least one additional mechanical property parameter of the sample based on the response measured during application of the acoustic energy and the recovery response measured following cessation of the application of acoustic energy. Full Article
read Methods, systems, and computer readable media for simulating realistic movement of user equipment in a long term evolution (LTE) network By www.freepatentsonline.com Published On :: Tue, 01 Sep 2015 08:00:00 EDT Methods, systems, and computer readable media for simulating realistic movement of user equipment in an LTE network are disclosed. According to one method, a logical topology of a long term evolution (LTE) access network is defined that includes defining connections between one or more eNodeBs (eNBs). A physical topology of the LTE access network is defined that includes defining locations of the eNBs and sectors, where the physical network topology is mapped to the logical network topology. One or more problem areas are defined within the physical network topology, where the one or more problem areas include locations where signal quality is degraded. One or more paths are defined through the physical network topology. A traffic profile for a user equipment (UE) device is defined. A plurality of messages is generated for simulating the movement of a UE device along a path through the physical network topology. Full Article
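The path-through-problem-areas step of the simulation can be sketched in Python. Coordinates, the circular problem-area model, and the radius are illustrative assumptions layered on the abstract's defined concepts (paths, problem areas, degraded signal quality).

```python
def simulate_ue_path(path, problem_areas, radius=1.0):
    """Walk a UE along a path of (x, y) waypoints and record, for each
    waypoint, whether signal quality is degraded because the UE is
    within `radius` of a defined problem area."""
    events = []
    for x, y in path:
        degraded = any(
            (x - px) ** 2 + (y - py) ** 2 <= radius ** 2
            for px, py in problem_areas
        )
        events.append(((x, y), degraded))
    return events

# UE moves from the origin into a problem area centered at (5, 5).
trace = simulate_ue_path([(0, 0), (5, 5)], problem_areas=[(5, 5)])
```

A fuller simulator would also generate the per-waypoint signaling messages against the mapped logical topology; this sketch only shows the geometric degradation check.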
read Reader fabrication method employing developable bottom anti-reflective coating By www.freepatentsonline.com Published On :: Tue, 19 May 2015 08:00:00 EDT Disclosed are methods for making read sensors using developable bottom anti-reflective coating and amorphous carbon (a-C) layers as junction milling masks. The methods described herein provide an excellent chemical mechanical polishing or planarization (CMP) stop, and improve control in reader critical physical parameters, shield to shield spacing (SSS) and free layer track width (FLTW). Full Article
read Information processing apparatus, computer-readable recording medium, and control method By www.freepatentsonline.com Published On :: Tue, 23 Jun 2015 08:00:00 EDT An abnormality detection unit provided in at least one node among a plurality of nodes included in an information processing apparatus detects abnormality in a data transmission path of data transmission using a shared memory area sharable in a single node and other node, which is included in the storage unit provided in the single node or other nodes. An error information generation unit provided in the single node generates error information, based on the abnormality detected by the abnormality detection unit, and generates an interrupt with respect to a processor within a self node. The processor provided in the single node performs recovery processing, based on the error information according to the interrupt. Full Article
read Treadle-drive eccentric wheel transmission wheel series with periodically varied speed ratio and having inward packing auxiliary wheel By www.freepatentsonline.com Published On :: Tue, 31 Mar 2015 08:00:00 EDT In the present invention, one or both of an active wheel and a passive wheel is composed of an eccentric transmission wheel and is combined with a synchronous transmission belt to form an eccentric wheel transmission wheel series, so that when the feet input forces at different angles from the treadle shafts of the treadles to an active wheel shaft combined on the active wheel through cranks, the active wheel forms different transmission speed ratios relative to the passive wheel according to the treadle angle, and inward packing is performed on the transmission belt (100) at the engagement end of the eccentric passive wheel (413) during transmission to stabilize the operation. Full Article
read Treadle-drive eccentric wheel transmission wheel series with periodically varied speed ratio By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT The present invention is structured with one or both of an active wheel and a passive wheel composed of an eccentric transmission wheel and combined with a synchronous transmission belt to form an eccentric wheel transmission wheel series, so that during reciprocal treadling by the user's feet, when the feet input forces at different angles from the treadle shafts of the treadles to an active wheel shaft combined on the active wheel through cranks, the active wheel forms different transmission speed ratios relative to the passive wheel according to the treadle angle. Full Article
read Method for making threaded tube By www.freepatentsonline.com Published On :: Tue, 22 Jan 2013 08:00:00 EST The invention includes a method, and a component made according to the method, having at least one thread pattern formed thereon by a stamping method. The invention includes a tubular member comprising a body having a wall formed from a wrapped sheet of stock to define an interior wall and an exterior wall, a seam in the wall defining a first and second end of the wrapped sheet of stock, and a thread pattern stamped on the exterior wall. The method comprises the steps of forming a blank from a sheet of stock having a first surface. A thread pattern is formed onto the first surface while in a substantially sheet-like form. A bending operation then forms the sheet stock into a tubular member such that the thread pattern, located on the tube's external surface, is substantially aligned about its circumference. Full Article
read Method and device for manufacturing fastenings or fasteners with radial outer contours, especially screws or threaded bolts By www.freepatentsonline.com Published On :: Tue, 26 Feb 2013 08:00:00 EST A method of manufacturing fastenings or fasteners with radial outer contours, especially screws or threaded bolts, made of solid metal is performed by a device, preferably on a multi-stage press. Several recesses running in an axial direction at a fixed radial distance are formed in the shank-shaped section of a blank. The prefabricated blank with the recesses is inserted into a multi-part split mold within a multi-stage press, whose die stocks have an inner profiling forming the outer contour and are open in the starting position, so that the recesses lie at the places where the die stocks are open. During the closing movement of the die stocks, at least one radial outer contour is pressed onto the shank-shaped section of the blank by radial action of forces, with the recesses preventing material from getting between the die stocks during the pressing process. Full Article
read Nut, female thread machining device and female thread machining method By www.freepatentsonline.com Published On :: Tue, 16 Apr 2013 08:00:00 EDT There is provided a nut having a thread portion having a female thread, a metallic plate portion having a base segment, and a hardness gradient portion provided between the thread portion and the metallic plate portion. The thread portion, metallic plate portion, and hardness gradient portion are monolithic with each other; the metallographic structure of the metallic plate portion differs from that of the thread portion; and the hardness of the hardness gradient portion is lower than that of the thread portion and decreases from the thread portion toward the metallic plate portion. Full Article
read Thread forming tap By www.freepatentsonline.com Published On :: Tue, 23 Apr 2013 08:00:00 EDT A thread forming tap having a complete thread portion formed with a predetermined back taper that decreases rotational torque during tapping work, which reduces the load acting on the first complete protruding portions formed at the extreme leading end of the complete thread portion, thereby suppressing degradation in tool service life due to wear. Full Article
read Threadrolling machine with device for unloading workpieces By www.freepatentsonline.com Published On :: Tue, 25 Jun 2013 08:00:00 EDT A rolling machine comprises parallel guides delimiting a workpiece conveying channel extending from a plurality of per se known rolling tools to a machined workpiece unloading arrangement, wherein, upstream of the workpiece unloading arrangement, one of the channel delimiting guides is operatively coupled to a structural element swingably supported by a pivot pin, the structural element being integral with the piston rod of a cylinder-piston unit slidably driving the structural element together with the channel delimiting guide, thereby providing a side unloading opening for the workpiece. Full Article
read Twisted threaded reinforcing bar By www.freepatentsonline.com Published On :: Tue, 17 Sep 2013 08:00:00 EDT Techniques for reinforcing concrete using rebar are disclosed. Some example embodiments may include prestressed concrete structures reinforced by twisted, threaded reinforcing bars. An example reinforcing bar for a prestressed concrete structure may include an elongated, generally cylindrical rod; an external thread disposed on the generally cylindrical rod, the external thread formed from an elongated, generally nonlinear channel wrapped about a radial surface of the generally cylindrical rod in a generally helical fashion. A base portion of the nonlinear channel may be disposed substantially against the radial surface of the generally cylindrical rod and/or an upstanding portion of the nonlinear channel may extend generally orthogonally from the radial surface of the generally cylindrical rod. Full Article
read Method for manufacturing a thread-forming screw By www.freepatentsonline.com Published On :: Tue, 15 Apr 2014 08:00:00 EDT A method for manufacturing a thread-forming screw having a shank and a thread formed in one piece with the shank and region-wise circumferentially arranged on the shank, is disclosed. After the formation of the thread on the shank, a plurality of recesses is subsequently stamped into the thread. Then, a plurality of compact cutting elements is welded into the recesses in the thread, where the cutting elements are made of a hard material and have a hardness greater than the hardness of the thread. Additionally, a stamping device for carrying out the method is also disclosed. Full Article
read Apparatuses and methods for rolling angled threads By www.freepatentsonline.com Published On :: Tue, 20 May 2014 08:00:00 EDT In various embodiments, a tapered thread roll, a set of tapered thread rolls, a thread rolling tool, and a thread rolling method are provided for rolling angled or tapered threads onto a workpiece to create a threaded workpiece. In at least one embodiment, the threaded workpiece may comprise a polished rod or a polished rod precursor as specified by the American Petroleum Institute for use in an oil-field sucker-type pump, for example. Full Article
read Cutting insert for threading By www.freepatentsonline.com Published On :: Tue, 03 Jun 2014 08:00:00 EDT A threading cutting insert achieves high shape accuracy of the screw to be processed and saves on manufacturing cost. In the threading cutting insert, a plurality of tooth-shaped cutting edges is formed in the cross ridge line portion between a rake face and flanks formed in a cutting side face, wherein the tooth-shaped cutting edges provide at least one finishing cutting edge for transferring the shape of the screw, and at least one roughing cutting edge formed with a tooth shape smaller than that of the finishing cutting edge. The flank of the finishing cutting edge includes a first flank, and a second flank having a clearance angle larger than that of the first flank, wherein the finishing cutting edge, the first flank, and the second flank are provided sequentially in that order from the rake face toward the lower surface of the insert. Full Article
read Releasable thread chaser By www.freepatentsonline.com Published On :: Tue, 05 Aug 2014 08:00:00 EDT A pivoting split thread chasing die with fastener is disclosed. In one example, the fastener can include a retainer that captures the fastener within one half of the thread chasing die. The thread chasing die may make it easier for a user to refurbish threads of large bolts and studs. Full Article
read Tool for repairing cross-threading and other damage in threaded blind holes By www.freepatentsonline.com Published On :: Tue, 02 Sep 2014 08:00:00 EDT A slotted inverse tap, compressible for insertion past damaged entry threads in blind holes (FIG. 1 through FIG. 5), is disclosed. The tool can be made in smaller sizes than the prior art. An elongate slot (23) proceeds through a first threaded end (21), then well into a reduced-diameter cylindrical body (25). After insertion to the hole bottom, a tabbed shim (28) is inserted into the slot from its side, then pressed down until stopped. The shim (28) enforces mating engagement with undamaged internal threads. A second end of hex, squared, or other configuration facilitates use of a tap wrench or other tool for rotational extraction. Damaged threads are reformed/re-cut upon rotational withdrawal. Full Article
read Screw method for forming a screw thread By www.freepatentsonline.com Published On :: Tue, 02 Dec 2014 08:00:00 EST In the case of a screw having at least one thread (26) that is formed by a rolling process, especially a flat-die rolling process, whereby the thread (26) consists of two ridges of material (28a, 28b) which are shaped from a blank (12) by means of cold-forming during the rolling process in such a way that the thread has a closing crease (32) where the ridges of material (28a, 28b) meet each other, it is provided that the closing crease (32) is situated in the area of a flank (30a, 30b) of the thread (26). Full Article