ios Profiting by the Biosphere Rules By hbr.org Published On :: Thu, 15 Apr 2010 20:32:08 -0500 Gregory Unruh, director and professor of the Lincoln Center for Ethics in Global Management at the Thunderbird School. Full Article
ios Brian Grazer on the Power of Curiosity By hbr.org Published On :: Thu, 23 Apr 2015 12:40:11 -0500 The Oscar-winning producer explains why a passion for learning--about other people and pursuits--has been the key to his success. Full Article
ios The Power of Curiosity By hbr.org Published On :: Tue, 09 Oct 2018 16:40:50 -0500 Francesca Gino, a professor at Harvard Business School, shares a compelling business case for curiosity. Her research shows allowing employees to exercise their curiosity can lead to fewer conflicts and better outcomes. However, even managers who value inquisitive thinking often discourage curiosity in the workplace because they fear it's inefficient and unproductive. Gino offers several ways that leaders can instead model, cultivate, and even recruit for curiosity. Gino is the author of the HBR article "The Business Case for Curiosity." Full Article
ios Senior Designer: Moon Studios By feedproxy.google.com Published On :: 2020-05-05 Moon Studios - award-winning creators of Ori and the Blind Forest and Ori and the Will of the Wisps - are looking for Senior Game Designers. After redefining the Metroidvania genre with the Ori series, our next goal is to revolutionize the ARPG genre. Join our family, help us create some of the best games the industry has ever seen and work with some of the most talented individuals in the world! Please note that Moon Studios is a distributed development studio: Everyone at Moon works remotely and we accept job applications from participants all over the world! We're looking for: Individuals who can work autonomously - We don't count your hours, we won't babysit you. Your job is to enchant us with your raw talent and professionalism! Individuals who want to have a voice - Unhappy with the current state of the industry? Every person at Moon Studios has a voice and can help shape the games we're making. Individuals who believe in collaboration - We work together as a team, as a family, to challenge the status quo and raise the bar! Individuals who want to grow - Contrary to other AAA studios, our goal is to remain a comparatively small studio made of elite talent. We strive to only hire the absolute best talent in the industry and encourage them to further grow their skills with us. Tired of being the smartest gal or guy in the room? Join our community, inspire us, and be inspired! Since Moon Studios is a distributed game development studio, we work with talent from all around the world. To keep things personal and foster communication, we hold annual Team Retreats to touch base and figure out what challenges we want to tackle next. Drinks, dinners, flights, accommodations, and an overall swell time all included! We want you to be: A Designer by heart: While playing games, you just intuitively know how to improve upon the weaker aspects of a title. 
You indulge in analyzing things that don’t quite work and love to think about how this or that aspect could have been perfected. Experienced: You've worked in the industry before and know the ropes. You're looking for a no-bullshit studio to call home where you can voice your opinions and work with the best talent this industry has to offer! Passionate: You’ve played and studied RPGs your whole life and you still can’t get enough of them. You have a love for all things Diablo, Zelda, Dark Souls and other games in the genre. You’d love the opportunity to work on an RPG that dares to innovate and go far beyond what the genre has offered players thus far. Open to new challenges: We're constantly striving to raise the bar here at Moon Studios. We're not looking for specialists in one field or genre, we want you to be open to help wherever help is needed and be the well-rounded multi-talented creative genius you are. A cool dude / dudette: Life is too short to deal with Prima Donnas: You're cool, open-minded and always willing to learn new things. Simply send your Resume + Portfolio to jobs@moongamestudios.com Moon Studios is an independent video game development studio, founded in 2010 by Thomas Mahler (former Cinematic Artist at Blizzard Entertainment) and Gennadiy Korol (former Senior Graphics Engineer at Animation Lab). The company focuses on highly refined gameplay mechanics and prides itself on an excessive ‘iterative polish’ process. Moon Studios is a distributed development house: Our team members are spread throughout the world, allowing Moon to work with the best and most talented people in the games industry. In 2014, Moon announced it was working on Ori and the Blind Forest, which was released on March 11th, 2015 for Xbox One and PC via Steam, quickly followed up by the Ori and the Blind Forest Definitive Edition a year later. Ori and the Blind Forest received overwhelmingly high praise, a fantastic debut for Moon Studios. 
Ori and the Will of the Wisps is slated to be released on March 11th, 2020. Moon Studios is also working on a yet-unannounced title. Full Article
ios Senior Character TD: Moon Studios By feedproxy.google.com Published On :: 2020-05-05 Moon Studios - award-winning creators of Ori and the Blind Forest and Ori and the Will of the Wisps - are looking for Senior Character TDs. After redefining the Metroidvania genre with the Ori series, our next goal is to revolutionize the ARPG genre. Join our family, help us create some of the best games the industry has ever seen and work with some of the most talented individuals in the world! Please note that Moon Studios is a distributed development studio: Everyone at Moon works remotely and we accept job applications from participants all over the world! We're looking for: Individuals who can work autonomously - We don't count your hours, we won't babysit you. Your job is to enchant us with your raw talent and professionalism! Individuals who want to have a voice - Unhappy with the current state of the industry? Every person at Moon Studios has a voice and can help shape the games we're making. Individuals who believe in collaboration - We work together as a team, as a family, to challenge the status quo and raise the bar! Individuals who want to grow - Contrary to other AAA studios, our goal is to remain a comparatively small studio made of elite talent. We strive to only hire the absolute best talent in the industry and encourage them to further grow their skills with us. Tired of being the smartest gal or guy in the room? Join our community, inspire us, and be inspired! Since Moon Studios is a distributed game development studio, we work with talent from all around the world. To keep things personal and foster communication, we hold annual Team Retreats to touch base and figure out what challenges we want to tackle next. Drinks, dinners, flights, accommodations, and an overall swell time all included! Reach out to us if you... Are a top-notch Character TD who has production experience in creating character rigs and developing tools, pipelines, etc.
Are extremely experienced with rigging in Maya and developing tools with Python. Have a solid understanding of animation principles and processes, know what animators need and can help create friendly and intuitive rigs and tools. Have at least a basic knowledge of game production, game engines and real-time limitations. Are knowledgeable in more than just rigging. We want as many multi-talented creative geniuses as possible in our studio! Simply send your Resume + Portfolio to jobs@moongamestudios.com Moon Studios is an independent video game development studio, founded in 2010 by Thomas Mahler (former Cinematic Artist at Blizzard Entertainment) and Gennadiy Korol (former Senior Graphics Engineer at Animation Lab). The company focuses on highly refined gameplay mechanics and prides itself on an excessive ‘iterative polish’ process. Moon Studios is a distributed development house: Our team members are spread throughout the world, allowing Moon to work with the best and most talented people in the games industry. In 2014, Moon announced it was working on Ori and the Blind Forest, which was released on March 11th, 2015 for Xbox One and PC via Steam, quickly followed up by the Ori and the Blind Forest Definitive Edition a year later. Ori and the Blind Forest received overwhelmingly high praise, a fantastic debut for Moon Studios. Ori and the Will of the Wisps is slated to be released on March 11th, 2020. Moon Studios is also working on a yet-unannounced title. Full Article
ios NRIs/PIOs can now send funds online for Modi govt's flagship schemes like Swachch Bharat, Clean Ganga By economictimes.indiatimes.com Published On :: 2016-07-31T17:26:03+05:30 Swaraj said this while chairing a meeting of India Development Foundation of Overseas Indians, a trust established to supplement development efforts. Full Article
ios University of Washington biostatistician unhappy with ever-changing University of Washington coronavirus projections By statmodeling.stat.columbia.edu Published On :: Tue, 05 May 2020 20:56:10 +0000 The University of Washington in Seattle is a big place. It includes the Institute for Health Metrics and Evaluation (IHME), which has produced a widely-circulated and widely-criticized coronavirus model. As we’ve discussed, the IHME model is essentially a curve-fitting exercise that makes projections using the second derivative of the time trend on the log scale. […] Full Article Miscellaneous Statistics Public Health Sociology
ios The idiosyncrasies of streams: local variability mitigates vulnerability of trout to changing conditions By www.fs.fed.us Published On :: Wed., 30 Nov 2016 12:00:00 PST Land use and climate change are two key factors with the potential to affect stream conditions and fish habitat. Since the 1950s, Washington and Oregon have required forest practices designed to mitigate the effects of timber harvest on streams and fish. Full Article
ios CSSplay - CSS responsive video aspect ratios By www.cssplay.co.uk Published On :: 2015-02-06 A tutorial for CSS-only responsive video (iframe) aspect ratios. Full Article
ios AudioSweets Make New PopCore Volume Available By www.allaccess.com Published On :: Mon, 04 May 2020 07:27:47 -0700 AUDIOSWEETS has released the latest in its imaging POPCORE series, POPCORE VOL. 14 from ASX. POPCORE VOL. 14 features 220 imaging elements with 11 categories in the update including Artist … more Full Article
ios AHA media alert: COVID-19 raises questions about increased risk for people with CVD and stroke survivors By newsroom.heart.org Published On :: Fri, 03 Apr 2020 14:50:00 GMT AHA COVID-19 newsroom DALLAS, April 3, 2020 – COVID-19 is generating widespread questions and concerns about the increased risk it poses to people with heart disease and survivors of... Full Article
ios Free online lessons for all faith communities By newsroom.heart.org Published On :: Wed, 08 Apr 2020 21:50:00 GMT DALLAS, April 8, 2020 — Approximately 120 million people in the United States have one or more cardiovascular diseases, which can increase the risk of complications from COVID-19. In addition, those with hypertension,... Full Article
ios College students receive scholarships to help address health disparities By newsroom.heart.org Published On :: Wed, 06 May 2020 02:00:00 GMT DALLAS, May 5, 2020 — Ten college students will receive US$10,000 scholarships from the American Heart Association to support the work these students are doing to close disparity gaps in the field of... Full Article
ios Concurrency & Multithreading in iOS By feedproxy.google.com Published On :: Tue, 25 Feb 2020 08:00:00 -0500 Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this: Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this: Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce such behavior in our iOS applications.

A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequently, chip manufacturers started adding additional processor cores on each chip in order to increase total performance.
By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem... How can we take advantage of these extra cores? Multithreading. Multithreading is an implementation handled by the host operating system to allow the creation and usage of n amount of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads, but we'll take a look in a bit as to why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram: In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.

The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment.
However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage. Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

- Responsibly create new threads, adjusting that number dynamically as system conditions change
- Manage them carefully, deallocating them from memory once they have finished executing
- Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
- Mitigate risks associated with coding an application that assumes most of the costs associated with creating and maintaining any threads it uses, and not the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.

Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.
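That "returns immediately" behavior can be sketched in a few lines with Foundation's DispatchQueue and DispatchGroup (the sleep stands in for real work; the flag names are illustrative):

```swift
import Foundation

let group = DispatchGroup()
var finishedInBackground = false

group.enter()
DispatchQueue.global(qos: .userInitiated).async {
    Thread.sleep(forTimeInterval: 0.1)   // pretend this is a long-running task
    finishedInBackground = true
    group.leave()
}

// async(_:) has already returned; the task is still running in the background.
let finishedAtCallSite = finishedInBackground

group.wait()   // block here only so the demo can verify the outcome
print(finishedAtCallSite, finishedInBackground)   // false true
```

The caller observes `false` immediately after submission, and `true` only after waiting on the group, which is exactly the fire-and-return contract described above.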
A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads. Let's take a look at the main components of GCD: What've we got here? Let's start from the left:

- DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Subsequently, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
- DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should resort to using default most of the time. Because tasks on these queues are executed concurrently, completion order isn't guaranteed to match the order in which tasks were queued.

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything.
Let's pretend it is post-processing an image stored on the device.

```swift
import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}
```

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest. We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating, etc. Let's make a small change to our button click handler above:

```swift
class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}
```

Unless specified, a snippet of code will usually default to execute on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that it is guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance. Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing. You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will treat it as a low-priority task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue".
If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application." The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.

Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in. Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

```swift
class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}
```

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions?
How can Spotify download multiple songs in parallel, while limiting the maximum number up to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores. Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlocks after it's done to let other threads execute the said section of the code. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database and preventing any reads during that time? This is a common concern in thread-safety called Readers-writer lock. Semaphores can be used to control concurrency in our app by allowing us to lock n number of threads.

```swift
let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()
                // Expensive task
                self.download(i + 1)
                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()
                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0
        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}
```

Notice how we've effectively restricted our download system to limit itself to k number of downloads. The moment one download finishes (or thread is done executing), it decrements the semaphore, allowing the managing queue to spawn another thread and start downloading another song.
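A condensed, runnable variant of the same pattern can verify that the semaphore really caps concurrency. Here the downloads are simulated with sleeps, and a serial "state" queue (an addition for this sketch, not part of the original example) tracks how many tasks run at once:

```swift
import Foundation

let maxConcurrent = 3
let semaphore = DispatchSemaphore(value: maxConcurrent)
let downloadQueue = DispatchQueue(label: "com.example.downloadQueue", attributes: .concurrent)
let stateQueue = DispatchQueue(label: "com.example.state") // serializes counter access
let group = DispatchGroup()

var inFlight = 0
var peakInFlight = 0

for songId in 1...10 {
    group.enter()
    downloadQueue.async {
        semaphore.wait()                         // blocks once 3 tasks are active
        stateQueue.sync {
            inFlight += 1
            peakInFlight = max(peakInFlight, inFlight)
        }
        // Simulate a semi-random download time for this song.
        Thread.sleep(forTimeInterval: 0.02 + Double(songId % 3) * 0.01)
        stateQueue.sync { inFlight -= 1 }
        semaphore.signal()                       // let the next task proceed
        group.leave()
    }
}

group.wait()
print("peak concurrent downloads:", peakInFlight)
```

Because every task increments the counter only between `wait()` and `signal()`, `peakInFlight` can never exceed 3, no matter how the scheduler interleaves the threads.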
You can apply a similar pattern to database transactions when dealing with concurrent reads and writes. Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.

Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this: This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API: You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application. The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue. Operations can be paused, resumed, and cancelled.
Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle. OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects. The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

```swift
class ViewController: UIViewController {
    var queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let downloadOperation = BlockOperation {
            let image = Downloader.downloadImageWithURL(url: self.imageUrl)
            OperationQueue.main.addOperation {
                self.rawImage = image
            }
        }

        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        filterOperation.addDependency(downloadOperation)
        [downloadOperation, filterOperation].forEach { queue.addOperation($0) }
    }
}
```

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation.
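The maxConcurrentOperationCount control mentioned above can be seen in a small runnable sketch (the cap of 2, the sleep, and the counter names are all illustrative; an NSLock guards the shared counters):

```swift
import Foundation

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 2   // at most 2 operations run at once

let counterLock = NSLock()
var inFlight = 0
var peak = 0

for _ in 1...8 {
    queue.addOperation {
        counterLock.lock()
        inFlight += 1
        peak = max(peak, inFlight)
        counterLock.unlock()

        Thread.sleep(forTimeInterval: 0.05)   // simulated work

        counterLock.lock()
        inFlight -= 1
        counterLock.unlock()
    }
}

queue.waitUntilAllOperationsAreFinished()
print("peak concurrency:", peak)   // never exceeds 2
```

The queue schedules all eight operations, but the observed peak concurrency stays at or below the configured cap, which is the kind of throttling the semaphore example achieved by hand.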
Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.

The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it. We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

- Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using the dispatchQueue.sync { } calls as you could easily get yourself in situations where two synchronous operations can get stuck waiting for each other.
- Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility.
- Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it.
This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD. ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency. Parting Thoughts + Further Reading If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper. Building Concurrent User Interfaces on iOS (WWDC 2012) Concurrency and Parallelism: Understanding I/O Apple's Official Concurrency Programming Guide Mutexes and Closure Capture in Swift Locks, Thread Safety, and Swift Advanced NSOperations (WWDC 2015) NSHipster: NSOperation Full Article Code
ios Concurrency & Multithreading in iOS By feedproxy.google.com Published On :: Tue, 25 Feb 2020 08:00:00 -0500 Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this: Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this: Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce this behavior in our iOS applications.

A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequently, chip manufacturers started adding additional processor cores on each chip in order to increase total performance.
By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem... How can we take advantage of these extra cores? Multithreading. Multithreading is an implementation handled by the host operating system to allow the creation and usage of n threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads, but we'll take a look in a bit at why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram: In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.

The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment.
However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage. Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:
Responsibly create new threads, adjusting that number dynamically as system conditions change
Manage them carefully, deallocating them from memory once they have finished executing
Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
Mitigate risks associated with coding an application that assumes most of the costs associated with creating and maintaining any threads it uses, and not the host OS
This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.

Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.
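To make the call-and-return-immediately pattern concrete, here is a minimal sketch of what such an asynchronous function might look like. The `loadData` function and its URL are hypothetical, invented purely for illustration; they are not part of any framework.

```swift
import Foundation

// A hypothetical asynchronous function: it kicks off work on a background
// queue and returns immediately; the completion handler fires later.
func loadData(from url: URL, completion: @escaping (Data?) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        // The long-running work happens off the caller's thread.
        let data = try? Data(contentsOf: url)
        completion(data)
    }
}

// The call site returns right away; the calling thread is never blocked.
loadData(from: URL(string: "https://example.com/file.json")!) { data in
    print("Received \(data?.count ?? 0) bytes")
}
// Typically prints before the download above finishes.
print("loadData has already returned.")
```

Note that in a short-lived script the process may exit before the background work completes; in a real app the run loop keeps the process alive long enough for the completion handler to fire.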
A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads. Let's take a look at the main components of GCD: What've we got here? Let's start from the left:
DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Accordingly, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should resort to using default most of the time. Because tasks on these queues are executed concurrently, there is no guarantee that the order in which tasks were queued will be preserved.
Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything.
Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest. We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating, etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified otherwise, a snippet of code will usually default to executing on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that they are guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance. Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing. You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will treat it as a utility task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue".
If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application." The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.

Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you a concurrent dispatch queue for each quality-of-service level you pass in. Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage](repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions?
How can Spotify download multiple songs in parallel, while limiting the maximum number to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores. Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlock it after it's done to let other threads execute said section of the code. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database while preventing any reads during that time? This is a common concern in thread safety called a readers-writer lock. Semaphores can be used to control concurrency in our app by allowing us to limit execution to n threads at a time.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()
                // Expensive task
                self.download(i + 1)
                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()
                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0
        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to a limit of k simultaneous downloads. The moment one download finishes (or its thread is done executing), it signals the semaphore, incrementing its counter and allowing the managing queue to spawn another thread and start downloading another song.
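One common way to solve the readers-writer problem described above is GCD's `.barrier` flag on a concurrent queue: reads run concurrently with each other, while a barrier block waits for all in-flight work and then runs alone. This is a minimal sketch under assumed names — the `Store` type and its methods are hypothetical, not from any framework:

```swift
import Foundation

// A tiny thread-safe key-value store: a concurrent queue plus barrier writes.
final class Store {
    private var storage: [String: Int] = [:]
    private let queue = DispatchQueue(label: "com.app.storeQueue", attributes: .concurrent)

    // Reads execute concurrently with other reads.
    func value(for key: String) -> Int? {
        queue.sync { storage[key] }
    }

    // A barrier write waits for in-flight reads, then runs exclusively.
    func set(_ value: Int, for key: String) {
        queue.async(flags: .barrier) { self.storage[key] = value }
    }
}

let store = Store()
store.set(42, for: "answer")
// The sync read below is enqueued after the barrier, so it sees the write.
print(store.value(for: "answer") ?? -1)
```

The design choice here is the usual one: writes are rare and exclusive, reads are frequent and concurrent, so readers pay no locking cost against each other.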
You can apply a similar pattern to database transactions when dealing with concurrent reads and writes. Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.

Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this: This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API: You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application. The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue. Operations can be paused, resumed, and cancelled.
Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle. OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects. The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    var queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let downloadOperation = BlockOperation {
            let image = Downloader.downloadImageWithURL(url: self.imageUrl)
            OperationQueue.main.addOperation {
                self.rawImage = image
            }
        }
        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        filterOperation.addDependency(downloadOperation)
        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation.
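To illustrate the extra control mentioned above, here is a hedged sketch of capping concurrency and cancelling work with OperationQueue. The work closures are placeholders, not real download code:

```swift
import Foundation

let queue = OperationQueue()
// Cap the queue at three simultaneous operations — the same limit as the
// earlier semaphore-based download example, but expressed declaratively.
queue.maxConcurrentOperationCount = 3

let operations: [BlockOperation] = (1...10).map { i in
    BlockOperation {
        // Placeholder for real work, e.g. downloading song number `i`.
        Thread.sleep(forTimeInterval: 0.1)
    }
}
queue.addOperations(operations, waitUntilFinished: false)

// Unlike a block handed to GCD, queued operations can still be cancelled:
queue.cancelAllOperations()
queue.waitUntilAllOperationsAreFinished()
```

Cancellation is cooperative: operations that have not started yet are skipped, while long-running custom Operation subclasses are expected to check isCancelled periodically and bail out.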
Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.

The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness of an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it. We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:
Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using DispatchQueue.sync { } calls, as you can easily get into situations where two synchronous operations end up waiting on each other indefinitely.
Priority Inversion: A condition where a lower-priority task blocks a higher-priority task from executing, effectively inverting their priorities. GCD allows for different levels of priority on its background queues, so priority inversion is easy to trigger inadvertently.
Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it.
This is a synchronization problem, and it can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD. ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.

Building Concurrent User Interfaces on iOS (WWDC 2012)
Concurrency and Parallelism: Understanding I/O
Apple's Official Concurrency Programming Guide
Mutexes and Closure Capture in Swift
Locks, Thread Safety, and Swift
Advanced NSOperations (WWDC 2015)
NSHipster: NSOperation
How can Spotify download multiple songs in parallel, while limiting the maximum number up to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores. Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlocks after it's done to let other threads execute the said section of the code. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database and preventing any reads during that time? This is a common concern in thread-safety called Readers-writer lock. Semaphores can be used to control concurrency in our app by allowing us to lock n number of threads. let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads! let semaphore = DispatchSemaphore(value: kMaxConcurrent) let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent) class ViewController: UIViewController { @IBAction func handleTap(_ sender: Any) { for i in 0..<15 { downloadQueue.async { [unowned self] in // Lock shared resource access semaphore.wait() // Expensive task self.download(i + 1) // Update the UI on the main thread, always! DispatchQueue.main.async { tableView.reloadData() // Release the lock semaphore.signal() } } } } func download(_ songId: Int) -> Void { var counter = 0 // Simulate semi-random download times. for _ in 0..<Int.random(in: 999999...10000000) { counter += songId } } } Notice how we've effectively restricted our download system to limit itself to k number of downloads. The moment one download finishes (or thread is done executing), it decrements the semaphore, allowing the managing queue to spawn another thread and start downloading another song. 
You can apply a similar pattern to database transactions when dealing with concurrent reads and writes. Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior whille consuming an asynchronous API. The above could would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless. Finer Control with OperationQueue GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this: This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCI API: You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application. The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue. Operations can be paused, resumed, and cancelled. 
Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle. OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects. The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this: class ViewController: UIViewController { var queue = OperationQueue() var rawImage = UIImage? = nil let imageUrl = URL(string: "https://example.com/portrait.jpg")! @IBOutlet weak var imageView: UIImageView! let downloadOperation = BlockOperation { let image = Downloader.downloadImageWithURL(url: imageUrl) OperationQueue.main.async { self.rawImage = image } } let filterOperation = BlockOperation { let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage) OperationQueue.main.async { self.imageView = filteredImage } } filterOperation.addDependency(downloadOperation) [downloadOperation, filterOperation].forEach { queue.addOperation($0) } } So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. 
Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds. The Cost of Concurrency DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it. We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like: Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using the dispatchQueue.sync { } calls as you could easily get yourself in situations where two synchronous operations can get stuck waiting for each other. Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility. Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. 
This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD. ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency. Parting Thoughts + Further Reading If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper. Building Concurrent User Interfaces on iOS (WWDC 2012) Concurrency and Parallelism: Understanding I/O Apple's Official Concurrency Programming Guide Mutexes and Closure Capture in Swift Locks, Thread Safety, and Swift Advanced NSOperations (WWDC 2015) NSHipster: NSOperation Full Article Code
ios Usability task scenarios: The beating heart of a usability test By feedproxy.google.com Published On :: Mon, 2 Dec 2019 07:22:13 GMT Usability tests are unique. We ask people to do real tasks with the system and watch. As the person completes the task, we watch their behaviour and listen to their stream-of-consciousness narrative. But what makes a good usability task scenario? Full Article
ios Concurrency & Multithreading in iOS By feedproxy.google.com Published On :: Tue, 25 Feb 2020 08:00:00 -0500 Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this: Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this: Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce this behavior into our iOS applications.

A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequently, chip manufacturers started adding additional processor cores on each chip in order to increase total performance.
By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem... How can we take advantage of these extra cores? Multithreading. Multithreading is a capability provided by the host operating system that allows the creation and use of n threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads, and we'll take a look in a bit at why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram: In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.

The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment.
However, most of these programs are either system daemons or background processes with a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage. Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

- Responsibly create new threads, adjusting that number dynamically as system conditions change
- Manage them carefully, deallocating them from memory once they have finished executing
- Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
- Mitigate the risks of coding an application that assumes most of the costs of creating and maintaining any threads it uses, rather than leaving that to the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.

Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.
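For contrast, here is a sketch of what managing threads by hand looks like with Foundation's Thread API. The workload and names are illustrative, not from the article; note the explicit locking and the do-it-yourself completion check.

```swift
import Foundation

// Manual thread management: we create, start, and synchronize each
// thread ourselves; this is the bookkeeping GCD takes off our plate.
let lock = NSLock()
var results: [Int] = []

let threads = (0..<4).map { i in
    Thread {
        lock.lock()              // every shared access needs explicit locking
        results.append(i * i)
        lock.unlock()
    }
}
threads.forEach { $0.start() }

// Thread has no built-in join, so we poll for completion ourselves.
while threads.contains(where: { !$0.isFinished }) {
    usleep(1_000)
}
print(results.sorted())  // [0, 1, 4, 9]
```

Every one of these chores (locking, lifetime, completion tracking) disappears once the work is expressed as tasks submitted to a dispatch queue.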
A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads. Let's take a look at the main components of GCD: What've we got here? Let's start from the left:

- DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Accordingly, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
- DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which queue to execute it on, although you should stick with the default priority most of the time. Because tasks on these queues are executed concurrently, there is no guarantee that the order in which tasks were queued is preserved.

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything.
Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest. We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. Background threads can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating, etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified otherwise, a snippet of code will usually default to executing on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that it is guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance. Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user is free to do other things while the app is processing. You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will treat it as a utility task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue".
If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application." The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.

Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in. Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions?
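Before limiting how many tasks run at once, consider a simpler restriction: knowing when all of the parallel tasks have finished. The article doesn't cover DispatchGroup, but it is the standard GCD tool for exactly this; a minimal sketch (the queue labels and workload are illustrative):

```swift
import Dispatch

let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
let group = DispatchGroup()
let counterQueue = DispatchQueue(label: "com.app.counterQueue") // serializes counter access
var processed = 0

for _ in 0..<5 {
    queue.async(group: group) {
        // Pretend to post-process one image.
        var counter = 0
        for _ in 0..<100_000 { counter += 1 }
        counterQueue.sync { processed += 1 }
    }
}

// Blocks until every task submitted under the group has completed;
// group.notify(queue:) is the non-blocking alternative.
group.wait()
print(processed)  // 5
```

With completion handled, the next question is capping concurrency itself.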
How can Spotify download multiple songs in parallel, while limiting the maximum number to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores. Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of code while it executes it, and unlock it after it's done to let other threads execute that section. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database, with all reads prevented during that time? This is a common thread-safety concern called the readers-writer lock. Semaphores can be used to control concurrency in our app by allowing us to lock n number of threads.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()
                // Expensive task
                self.download(i + 1)
                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()
                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0
        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to at most k downloads at a time. The moment one download finishes (i.e., its thread is done executing), it signals the semaphore, incrementing its count and allowing the managing queue to spawn another thread and start downloading another song.
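Here is a stripped-down, UI-free version of the same semaphore pattern that runs as-is; it also records the high-water mark of concurrent "downloads" to show that the cap is honored. All names and timings are illustrative.

```swift
import Foundation

let kMaxConcurrent = 3
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)
let stateQueue = DispatchQueue(label: "com.app.stateQueue") // serializes the counters
let group = DispatchGroup()

var inFlight = 0
var maxInFlight = 0

for _ in 0..<10 {
    downloadQueue.async(group: group) {
        semaphore.wait()                 // blocks once kMaxConcurrent downloads are active
        stateQueue.sync {
            inFlight += 1
            maxInFlight = max(maxInFlight, inFlight)
        }
        usleep(5_000)                    // simulated download
        stateQueue.sync { inFlight -= 1 }
        semaphore.signal()               // lets the next waiting task proceed
    }
}
group.wait()
print(maxInFlight)  // never exceeds kMaxConcurrent
```

The serial stateQueue exists only so the counters themselves are updated safely; the semaphore alone enforces the download cap.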
You can apply a similar pattern to database transactions when dealing with concurrent reads and writes. Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.

Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended, and tracked, while still working with a closure-friendly API? Imagine an operation like this: This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

- You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This allows for maximum reusability, since you may use the same pattern elsewhere in an application.
- The Operation and OperationQueue classes have a number of properties that can be observed using KVO (Key-Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
- Operations can be paused, resumed, and cancelled.
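Two of those control points, a concurrency cap and observable lifecycle state, fit in a small runnable sketch. Nothing beyond Foundation is assumed, and the operation count is arbitrary:

```swift
import Foundation

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 2   // at most two operations run at once

let lock = NSLock()
var finished = 0

let operations = (0..<4).map { _ in
    BlockOperation {
        lock.lock()
        finished += 1
        lock.unlock()
    }
}
operations.forEach { queue.addOperation($0) }
queue.waitUntilAllOperationsAreFinished()

print(finished)                                 // 4
print(operations.allSatisfy { $0.isFinished })  // true: lifecycle state is inspectable
```

The isFinished, isExecuting, and isCancelled properties checked here are the same KVO-observable state the article mentions.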
Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle. OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects. The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    let queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let downloadOperation = BlockOperation {
            let image = Downloader.downloadImageWithURL(url: self.imageUrl)
            self.rawImage = image
        }
        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            // UI updates must happen on the main queue.
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }
        filterOperation.addDependency(downloadOperation)
        [downloadOperation, filterOperation].forEach { queue.addOperation($0) }
    }
}

So why not opt for a higher-level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation.
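The same dependency mechanism, stripped of the UI so it can run anywhere (the operation names are illustrative): even when enqueued in the wrong order, filterOperation waits for downloadOperation.

```swift
import Foundation

let queue = OperationQueue()
let lock = NSLock()
var log: [String] = []

let downloadOperation = BlockOperation {
    lock.lock(); log.append("download"); lock.unlock()
}
let filterOperation = BlockOperation {
    lock.lock(); log.append("filter"); lock.unlock()
}

// The dependency guarantees ordering, even on a concurrent queue.
filterOperation.addDependency(downloadOperation)

// Deliberately enqueue in reverse order; the dependency still wins.
[filterOperation, downloadOperation].forEach { queue.addOperation($0) }
queue.waitUntilAllOperationsAreFinished()

print(log)  // ["download", "filter"]
```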
Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.

The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness of an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks, like:

Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using DispatchQueue.sync { } calls, as you could easily get yourself into situations where two synchronous operations wait on each other indefinitely.

Priority Inversion: A condition where a lower-priority task blocks a high-priority task from executing, effectively inverting their priorities. GCD allows for different levels of priority on its background queues, so priority inversion is quite easily a possibility.

Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it.
This is a synchronization problem, and it can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.

...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes, and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.

Building Concurrent User Interfaces on iOS (WWDC 2012)
Concurrency and Parallelism: Understanding I/O
Apple's Official Concurrency Programming Guide
Mutexes and Closure Capture in Swift
Locks, Thread Safety, and Swift
Advanced NSOperations (WWDC 2015)
NSHipster: NSOperation

Full Article Code
ios Intra-Variable Handwriting Inspection Reinforced with Idiosyncrasy Analysis. (arXiv:1912.12168v2 [cs.CV] UPDATED) By arxiv.org Published On :: In this paper, we work on intra-variable handwriting, where the writing samples of an individual can vary significantly. Such within-writer variation throws a challenge for automatic writer inspection, where the state-of-the-art methods do not perform well. To deal with intra-variability, we analyze the idiosyncrasy in individual handwriting. We identify/verify the writer from highly idiosyncratic text-patches. Such patches are detected using a deep recurrent reinforcement learning-based architecture. An idiosyncratic score is assigned to every patch, which is predicted by employing deep regression analysis. For writer identification, we propose a deep neural architecture, which makes the final decision by the idiosyncratic score-induced weighted average of patch-based decisions. For writer verification, we propose two algorithms for patch-fed deep feature aggregation, which assist in authentication using a triplet network. The experiments were performed on two databases, where we obtained encouraging results. Full Article
ios Melissa Cole delves into new techniques at her Spokane studios By www.inlander.com Published On :: Wed, 08 Apr 2020 18:30:00 -0700 Sometimes when you're fairly well-known, especially for a particular style or product, it's tempting to stick with that style, especially if it's what pays the bills.… Full Article Home
ios Ferrocenyl ligands for homogeneous, enantioselective hydrogenation catalysts By www.freepatentsonline.com Published On :: Tue, 02 Mar 2010 08:00:00 EST Compounds of the formula (I) or (I'), where R1 is a hydrogen atom or C1-C4-alkyl and R'1 is C1-C4-alkyl; X1 and X2 are each, independently of one another, a secondary phosphine group; R2 is hydrogen, R01R02R03Si—, C1-C18-acyl substituted by halogen, hydroxy, C1-C8-alkoxy or R04R05N—, or R06—X01—C(O)—; R01, R02 and R03 are each, independently of one another, C1-C12-alkyl, unsubstituted or C1-C4-alkyl or C1-C4-alkoxy-substituted C6-C10-aryl or C7-C12-aralkyl; R04 and R05 are each, independently of one another, hydrogen, C1-C12-alkyl, C3-C8-cycloalkyl, C6-C10-aryl or C7-C12-aralkyl, or R04 and R05 together are trimethylene, tetramethylene, pentamethylene or 3-oxapentylene; R06 is C1-C18-alkyl, unsubstituted or C1-C4-alkyl- or C1-C4-alkoxy-substituted C3-C8-cycloalkyl, C6-C10-aryl or C7-C12-aralkyl; X01 is —O— or —NH—; T is C6-C20-arylene; v is 0 or an integer from 1 to 4; and * denotes a mixture of racemic or enantiomerically pure diastereomers or pure racemic or enantiomerically pure diastereomers, are excellent chiral ligands for metal complexes as enantioselective catalysts for the hydrogenation of prochiral organic compounds. Full Article
ios Near infrared fluorogen and fluorescent activating proteins for in vivo imaging and live-cell biosensing By www.freepatentsonline.com Published On :: Tue, 05 May 2015 08:00:00 EDT Tissue slices and whole organisms offer substantial challenges to fluorescence imaging. Autofluorescence and absorption via intrinsic chromophores, such as flavins, melanin, and hemoglobins, confound and degrade output from all fluorescent tags. An “optical window,” farther red than most autofluorescence sources and in a region of low hemoglobin and water absorbance, lies between 650 and 900 nm. This valley of relative optical clarity is an attractive target for fluorescence-based studies within tissues, intact organs, and living organisms. Novel fluorescent tags were developed herein, based upon a genetically targeted fluorogen activating protein and cognate fluorogenic dye that yields emission with a peak at 733 nm exclusively when complexed as a “fluoromodule”. This tool improves substantially over previously described far-red/NIR fluorescent proteins in terms of brightness, wavelength, and flexibility by leveraging the flexibility of synthetic chemistry to produce novel chromophores. Full Article
ios Biosensors and bio-measurement systems using the same By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT A biosensor is provided. The biosensor is used to sense a biological sample and has a code representing features of the biosensor. The biosensor includes a substrate and a conductive layer. The conductive layer is disposed on a first side of the substrate and includes a first conductive loop and a second conductive loop. The first conductive loop is formed between a first node and a second node and has a first impedance. The second conductive loop is formed between the second node and a third node and has a second impedance. The code is determined according to a comparison result between the second impedance and the first impedance. Full Article
ios Molecular biosensors capable of signal amplification By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT The present invention provides molecular biosensors capable of signal amplification, and methods of using the molecular biosensors to detect the presence of a target molecule. Full Article
ios Use and making of biosensors utilizing antimicrobial peptides for highly sensitive biological monitoring By www.freepatentsonline.com Published On :: Tue, 12 May 2015 08:00:00 EDT A biosensor and method of making are disclosed. The biosensor is configured to detect a target and may include a peptide immobilized on a sensing component, the sensing component having an anode and a cathode. The immobilized peptide may comprise an antimicrobial peptide binding motif for the target. The sensing component has an electrical conductivity that changes in response to binding of the immobilized peptide to the target. The immobilized peptide may bind one or more targets selected from the list consisting of: bacteria, Gram-negative bacteria, Gram-positive bacteria, pathogens, protozoa, fungi, viruses, and cancerous cells. The biosensor may have a display with a readout that is responsive to changes in electrical conductivity of the sensing component. The display unit may be wirelessly coupled to the sensing component. A resonant circuit with an inductive coil may be electrically coupled to the sensing component. A planar coil antenna may be disposed in proximity to the resonant circuit, the planar coil antenna being configured to provide power to the sensing component. Full Article
ios Biosensor By www.freepatentsonline.com Published On :: Tue, 19 May 2015 08:00:00 EDT The present application provides apparatus and methods for determining the density of a fluid sample. In particular, it provides a sensor device which can be loaded with a fluid sample such as blood, and which further comprises at least one oscillating beam member or resonator. Exposure of the blood sample to clotting agents allows a clotting reaction to commence. The device allows the density of the sample fluid to be monitored with reference to the oscillation of the vibrating beam member, thus allowing the monitoring of the clotting of the fluid sample. Full Article
ios Biosensors By www.freepatentsonline.com Published On :: Tue, 19 May 2015 08:00:00 EDT A chemiresistive biosensor for detecting an analyte can include a high specific surface area substrate conformally coated with a conductive polymer, and a binding reagent immobilized on the conductive polymer, wherein the binding reagent has a specific affinity for the analyte. The conductive polymer can be deposited on a substrate by oCVD. Full Article
ios Fully automatic self-service key duplicating kiosk By www.freepatentsonline.com Published On :: Tue, 17 Mar 2015 08:00:00 EDT A self-service, fully-automatic kiosk for duplicating keys includes a kiosk housing having a customer interface for receiving payment from a customer for the purchase of at least one duplicate of the customer's key. A key-receiving entry in the housing receives at least a portion of the customer's key to be duplicated, and a key analysis system within the housing analyzes the blade of a key inserted in the key-receiving entry to determine whether the inserted key matches one of a group of preselected key types and, if so, which preselected key type is matched. A key blank magazine within the housing stores key blanks for each of the preselected key types. A key blank extraction system extracts from the magazine a key blank for the preselected key type matched by the blade of the key inserted in the key-receiving entry. Then a key duplicating system within the kiosk replicates the tooth pattern of the blade of the key inserted in the key-receiving entry, on the blade of the extracted key blank. Full Article
ios High-voltage apparatus, and radiation source and radioscopic apparatus having the same By www.freepatentsonline.com Published On :: Tue, 19 May 2015 08:00:00 EDT In a high-voltage apparatus according to this invention, a predetermined voltage is applied to a rotating anode after waiting until the number of rotations increases to such an extent that the rotating anode is not damaged. That is, X-rays of desired intensity are already outputted from a point of time when the voltage is applied to the rotating anode. Therefore, diagnosis can be performed immediately after the voltage is applied to the rotating anode. That is, unlike the prior art, there is no need to wait until X-ray intensity becomes suitable for diagnosis after X-ray emission is started, and there is no need to irradiate the patient with unnecessary X-rays. Therefore, the patient can be inhibited from being irradiated with excessive X-rays (with an improvement made in a response from when the operator gives instructions for starting fluoroscopy until emission of X-rays suitable for diagnosis). Full Article
ios Cruable U-NII wireless radio with secure, integral antenna connection via SM BIOS in U-NII wireless ready device By www.freepatentsonline.com Published On :: Tue, 29 Apr 2008 08:00:00 EDT A method that utilizes software and hardware mechanisms to meet the FCC requirement for a U-NII antenna to be an integral part of the device in which it operates, while providing wireless ready U-NII devices and CRUable U-NII radios. Enhancements are made to the software BIOS, including the inclusion of a table of approved radio-antenna PCI ID pairs to create an authentication scheme that verifies and authenticates the radio and antenna combination as being an FCC-approved unique coupling during boot-up of the system. The BIOS also comprises an OEM field that stores an encrypted secret key utilized to complete a second check of the radio model placed in the device. During boot up of the device, the PCI ID pairs from the BIOS are compared against the PCI ID of the radio and the secret key is checked against the radio model. Only a system with an approved combination of radio and antenna is allowed to complete the boot process, indicating an FCC approved device-antenna-radio combination under the “integral” requirement. Full Article
ios Systems, apparatus, and methods for receiving paging messages by creating fat paths in fast fading scenarios By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT This disclosure provides systems, methods, and apparatus for receiving paging messages in fast fading scenarios. In one aspect, a method of demodulating a paging message during an assigned time slot by a wireless communications apparatus operating in an idle mode is provided. The method includes determining, in anticipation of the assigned time slot, an expected time position corresponding to a path of a pilot signal having a greater signal strength relative to other pilot signals. The method further includes assigning a first demodulation element to demodulate the pilot signal with reference to the expected time position and assigning a second demodulation element to demodulate the pilot signal with reference to a time offset from the expected time position. Other aspects, embodiments, and features are also claimed and described. Full Article
ios Dual validator self-service kiosk By www.freepatentsonline.com Published On :: Tue, 24 Mar 2015 08:00:00 EDT Apparatus and methods are provided for a dual validator self-service kiosk (“SSK”). The SSK may include a first validator. The first validator may examine a deposit inserted into the SSK. The SSK may include a second validator. The second validator may examine a tangible item before the SSK dispenses the tangible item. The SSK may retract the tangible item if the tangible item in not collected by a customer. The second validator may examine the tangible item after being retracted by the SSK. The first validator may apply a first examination routine to the deposit. The second validator may apply a second examination routine before the SSK dispenses the tangible item. The second validator may apply a third examination routine to a tangible item retracted by the SSK. Full Article
ios Self-service kiosk validator bridge By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT Apparatus and methods for a self-service kiosk (“SSK”) validator bridge are provided. The SSK may include a bridge linking a dispenser and a validator. The bridge may be configured to transfer a tangible item from the validator to the dispenser. The validator may examine the tangible item prior to a dispensing of the tangible item to a SSK customer. The tangible item may be retracted by the dispenser. The bridge may transfer the tangible item from the dispenser to the validator. The SSK may include an acceptor. The bridge may link the acceptor to the validator. The bridge may be configured to transfer a deposit from the acceptor to the validator. The validator may examine the deposit. Full Article
ios Self service kiosk incorporating moisture repellant filter By www.freepatentsonline.com Published On :: Tue, 12 May 2015 08:00:00 EDT A self service kiosk station employing a water repellant air filter is provided. The self service kiosk may take many forms including a vending machine, gaming station, ATM, DVD rental machine, or the like. Positive pressure within the housing may be employed as well to keep contaminants outs and ensure air flow into the housing is through the water repellant air filter. Full Article
ios ANTIGENS ASSOCIATED WITH ENDOMETRIOSIS, PSORIATIC ARTHRITIS AND PSORIASIS By www.freepatentsonline.com Published On :: Thu, 29 Jun 2017 08:00:00 EDT Specific binding members that bind the ED-A isoform of fibronectin for use in methods of diagnosis, detection, imaging and/or treatment of endometriosis, and/or for use in delivery to the neovasculature of endometriotic tissue of a molecule conjugated to the specific binding member. Specific binding members that bind tenascin-C, especially the A1, A2, A3, A4 and/or D domain tenascin-C large isoform, for use in methods of diagnosis, detection, imaging and/or treatment of endometriosis, psoriatic arthritis or psoriasis, and/or for use in delivery to the neovasculature of endometriotic, psoriatic arthritic or psoriatic tissue of a molecule conjugated to the specific binding member. Full Article
ios Compositions and Methods for Improving Rebaudioside M Solubility By www.freepatentsonline.com Published On :: Thu, 22 Jun 2017 08:00:00 EDT Rebaudioside M compositions with improved aqueous solubility and methods for preparing the same are provided herein. The rebaudioside M compositions include (i) disordered crystalline compositions comprising rebaudioside M and rebaudioside D, (ii) spray-dried compositions comprising rebaudioside M, rebaudioside D and steviol glycoside mixtures and/or rebaudioside B and/or NSF-02, (iii) spray-dried compositions comprising rebaudioside M, rebaudioside D and at least one surfactant, polymer, saponin, carbohydrate, polyol, preservative or a combination thereof. Sweetened compositions, such a beverages, containing the rebaudioside M compositions with improved water solubility are also provided herein. Full Article
ios Footcare product dispensing kiosk By www.freepatentsonline.com Published On :: Tue, 26 May 2015 08:00:00 EDT A kiosk apparatus that may select for a person a recommended footcare product based on pressure measurements collected from pressures sensors or calculated biomechanical data estimates. Pressure measurements and calculated biomechanical data estimates may be used to determine if a foot is unshod on the pressure sensor and also group a person into a classified subgroup. The pressure measurement and calculated biomechanical data estimates may also be used to select a recommended footcare product. Full Article
ios Remote Configuration and Operation of Fitness Studios from a Central Server By www.freepatentsonline.com Published On :: Thu, 22 Jun 2017 08:00:00 EDT A method for configuring and operating one or more fitness studios each comprising a plurality of exercise stations at which users perform associated exercise routines, each station having an associated display, the method comprising, for each fitness studio, periodically retrieving, by a server from a database, fitness information for the studio in question for a specified period, from a multi-period fitness library; communicating, by the server to a studio computer, the retrieved fitness information over a communications network; periodically receiving, by the studio computer, the retrieved fitness information; configuring the exercise stations dependent upon the received fitness information; and communicating, by the studio computer to the exercise station displays, dependent upon the received fitness information, station directions to users exercising at the stations for performing an exercise. Full Article
ios METHOD FOR MEASURING TEMPERATURE OF BIOLOGICAL SAMPLE, MEASURING DEVICE, AND BIOSENSOR SYSTEM By www.freepatentsonline.com Published On :: Thu, 22 Jun 2017 08:00:00 EDT The concentration measurement method includes: introducing a predetermined amount of the biological sample into the capillary; measuring a temperature of the biological sample by applying a first voltage to the electrode unit when the temperature of the biological sample is measured, the first voltage allowing the temperature measurement to be less affected by increase and reduction in an amount of the analyte contained in the biological sample; measuring the concentration of the analyte contained in the biological sample by applying a second voltage to the electrode unit; measuring an environmental temperature in a surrounding of the biological sample; and correcting the concentration of the measured analyte based on the measured temperature of the biological sample and the measured environmental temperature. Full Article
ios NICKEL ALLOYS FOR BIOSENSORS By www.freepatentsonline.com Published On :: Thu, 29 Jun 2017 08:00:00 EDT The present disclosure relates to metal alloys for biosensors. An electrode is made from the metal alloy, which more specifically can be a nickel-based alloy. The alloy provides physical and electrical property advantages when compared with existing pure metal electrodes. Full Article
ios POTENTIOSTAT/GALVANOSTAT WITH DIGITAL INTERFACE By www.freepatentsonline.com Published On :: Thu, 29 Jun 2017 08:00:00 EDT A potentiostat/galvanostat employs a controller for providing digital control signals to a digital-to-analog converter (DAC) that generates an analog output signal in response to digital control signals. A high current driver produces a high current output in response to the analog output signal from the DAC. A high current monitor monitors the output from the high current driver to produce a feedback signal for the high current driver to control the current produced by the high current driver and to produce an output dependent on the current supplied from the high current driver for monitoring by the controller. A counter electrode contact for a counter electrode is connected with the output of the high current monitor. A working electrode contact for a working electrode is electrically connected with a fixed stable voltage potential to enable electrochemical analysis of material between the counter electrode and the working electrode. A low current driver produces a low current range output in response to an analog output signal from the DAC. A low current monitor monitors the working electrode contact to detect current at the working electrode contact to supply an output dependent on the current detected for monitoring by the controller and for providing a feedback signal to the low current driver in order to control the output of the low current driver to control current between the counter electrode contact and the working electrode contact. Full Article
ios BIOSYNCHRONOUS TRANSDERMAL DRUG DELIVERY FOR LONGEVITY, ANTI-AGING, FATIGUE MANAGEMENT, OBESITY, WEIGHT LOSS, WEIGHT MANAGEMENT, DELIVERY OF NUTRACEUTICALS, AND THE TREATMENT OF HYPERGLYCEMIA, ALZHEIMER'S DISEASE, SLEEP DISORDERS, PARKINSON'S DISE By www.freepatentsonline.com Published On :: Thu, 29 Jun 2017 08:00:00 EDT Systems and methods for longevity, anti-aging, fatigue management, obesity, weight loss, weight management, delivery of nutraceuticals, and treating hyperglycemia, Alzheimer's disease, sleep disorders, Parkinson's disease, Attention Deficit Disorder and nicotine addiction involve synchronizing and tailoring the administration of nutraceuticals, medications and other substances in accordance with the body's natural circadian rhythms, meal times and other factors. Improved control of blood glucose levels, extended alertness, and weight control, and counteracting of disease symptoms when they are at their worst are possible. An automated, pre-programmable transdermal administration system is used to provide pulsed doses of medications, pharmaceuticals, hormones, neuropeptides, anorexigens, pro-drugs, stimulants, nutraceuticals, phytochemicals, phytonutrients, enzymes, antioxidants, essential oils, fatty acids, minerals, vitamins, amino acids, coenzymes, or other physiological active ingredient or precursor. The system can utilize a pump, pressurized reservoir, a system for removing depleted carrier solution, or other modulated dispensing actuator, in conjunction with porous membranes or micro-fabricated structures. Full Article
ios DETECTION OF BIOAGENTS USING A SHEAR HORIZONTAL SURFACE ACOUSTIC WAVE BIOSENSOR By www.freepatentsonline.com Published On :: Thu, 29 Jun 2017 08:00:00 EDT Viruses and other bioagents are of high medical and biodefense concern and their detection at concentrations well below the threshold necessary to cause health hazards continues to be a challenge with respect to sensitivity, specificity, and selectivity. Ideally, assays for accurate and real time detection of viral agents and other bioagents would not necessitate any pre-processing of the analyte, which would make them applicable for example to bodily fluids (blood, sputum) and man-made as well as naturally occurring bodies of water (pools, rivers). We describe herein a robust biosensor that combines the sensitivity of surface acoustic waves (SAW) generated at a frequency of 325 MHz with the specificity provided by antibodies and other ligands for the detection of viral agents. In preferred embodiments, a lithium tantalate based SAW transducer with silicon dioxide waveguide sensor platform featuring three test and one reference delay lines was used to adsorb antibodies directed against Coxsackie virus B4 or the negative-stranded category A bioagent Sin Nombre virus (SNV), a member of the genus Hantavirus, family Bunyaviridae, negative-stranded RNA viruses. Rapid detection (within seconds) of increasing concentrations of viral particles was linear over a range of orders of magnitude for both viruses, although the sensor was approximately 50×10⁴-fold more sensitive for the detection of SNV. For both pathogens, the sensor's selectivity for its target was not compromised by the presence of confounding Herpes Simplex virus type 1. The biosensor was able to detect SNV at doses lower than the load of virus typically found in a human patient suffering from hantavirus cardiopulmonary syndrome (HCPS).
Further, in a proof-of-principle real world application, the SAW biosensor was capable of selectively detecting SNV agents in complex solutions, such as naturally occurring bodies of water (river, sewage effluent) without analyte pre-processing. Full Article
ios Beat Sneak Bandit - iOS, iPhone, iPad, iPod By www.dailyecho.co.uk Published On :: Thu, 04 Apr 2013 15:21:24 +0100 Puzzle games can be tricky enough to tackle when you've got security guards, laser beams and pesky trapdoors to negotiate. Full Article
ios Minecraft: Xbox One Edition - 360, Android, iOS, PC, PS3, PS4, XO By www.dailyecho.co.uk Published On :: Mon, 22 Sep 2014 15:54:17 +0100 Minecraft's blocky appearance may at first appear primitive and like a game of yesteryear, but they're in fact the building blocks of which a player is bestowed god-like creative powers, shackled only by the limitations of their own mind. Full Article
ios Flockers - Android, iOS, Mac, PC, PlayStation 4, Xbox One By www.dailyecho.co.uk Published On :: Fri, 16 Jan 2015 13:11:41 +0000 EVER fancied being a shepherd in a steampunk environment, filled with perilous danger which wouldn’t be out of place in a Saw movie? Full Article
ios Tales from the Borderlands: Episode 2 - Atlas Mugged - Android, iOS, PC, PlayStation 3, PlayStation 4, Xbox 360, Xbox One, Vita By www.dailyecho.co.uk Published On :: Fri, 27 Mar 2015 17:29:26 +0000 IT’S been three months since we were treated to the rip-roaring hilarity-fest which was the opening episode of Tales from the Borderlands. It’s been a long wait. Full Article
ios The Trail: Frontier Challenge - Android, iOS, PC, Switch By www.dailyecho.co.uk Published On :: Sun, 15 Apr 2018 00:57:51 +0100 As a penniless wanderer making landfall in The Trail: Frontier Challenge, you have one thing to do: push the thumbstick forward and hike to the next camp. Full Article
ios Florence - iOS By www.dailyecho.co.uk Published On :: Sun, 15 Apr 2018 12:30:23 +0100 Without spoiling the plot, Florence is about how our relationships entwine with our lives. Full Article
ios Getting Around Indoors with the Clew for iOS By www.applevis.com Published On :: Fri, 01 Feb 2019 21:33:27 -0400 In this podcast, Thomas Domville gives us a demonstration of Clew for iOS. Clew is an AR indoor navigation app designed for visually impaired users to help them retrace their steps in unfamiliar environments. Clew on the App Store: https://itunes.apple.com/us/app/clew/id1268077870?mt=8 Full Article iOS iOS & iPadOS Apps Walk-through