process

Justice Department Announces Antitrust Civil Process Changes for Pendency of COVID-19 Event

The Department of Justice Antitrust Division announced today that it has adopted a series of temporary changes to its civil merger investigation processes, which will remain in place during the pendency of the coronavirus (COVID-19) event. These changes will ensure that the Antitrust Division will be able to continue operations as its employees carry out their duties to protect American consumers under a mass telework directive, in accordance with health guidance from the CDC, WHO, and other health authorities.




process

Justice Department Requires Divestitures as Dean Foods Sells Fluid Milk Processing Plants to DFA out of Bankruptcy

The Department of Justice announced today the conclusion of its investigation into proposed acquisitions by Dairy Farmers of America Inc. (DFA) and Prairie Farms Dairy Inc. (Prairie Farms) of fluid milk processing plants from Dean Foods Company (Dean) out of bankruptcy.  The department’s investigation was conducted against the backdrop of unprecedented challenges in the dairy industry, with the two largest fluid milk processors in the U.S., Dean and Borden Dairy Company, in bankruptcy, and Dean faced with imminent liquidation.




process

“I don’t want ‘crowd peer review’ or whatever you want to call it,” he said. “It’s just too burdensome and I’d rather have a more formal peer review process.”

I understand the above quote completely. Life would be so much simpler if my work was just reviewed by my personal friends and by people whose careers are tied to mine. Sure, they’d point out problems, but they’d do it in a nice way, quietly. They’d understand that any mistakes I made would never have […]




process

National Park Service Begins Process to Revise Backcountry Management Plan for Grand Canyon National Park

https://www.nps.gov/grca/learn/news/national-park-service-begins-process-to-revise-backcountry-management-plan-for-grand-canyon-national-park.htm




process

Grand Canyon Permit Processing Updates

Grand Canyon National Park's permit office is temporarily extending the permit processing time periods effective Feb. 14, 2020 until this summer. This change will impact commercial use authorizations (CUA) and special use permit (SUP) applications submitted after Feb. 14. https://www.nps.gov/grca/learn/news/grand-canyon-permit-processing-updates.htm




process

Ecosystem processes related to wood decay

Wood decay elements include snags, down wood, root wads, tree stumps, litter, duff, broomed or diseased branches, and partially dead trees, all of which contribute to ecological processes and biodiversity of the forest ecosystem. Down wood can serve as reservoirs for moisture and mycorrhizal fungi beneficial to the health and growth of commercial tree species. Decaying wood, leaf litter, small twigs, and roots contribute nutrients and structure to humus and soil organic matter, and host microbes that play beneficial roles in nitrogen cycles and other processes. Snags and down wood provide nurse functions for tree and shrub species, and can aid in restoration of degraded forest environments. Various elements of wood decay provide habitat for many species of wildlife including invertebrates, amphibians, reptiles, birds, and mammals. Fire can influence the amounts and distributions of wood decay elements and enhance or detract from desired ecosystem processes, depending on severity, charring, soil temperature, and other factors. Managing wood decay elements for ecosystem processes entails better understanding decay dynamics, the role of coarse wood in soil, the role of wood decay in carbon cycling and sequestration, and other considerations.




process

Tomotopigrafie. Visual models for dynamic and hierarchical topic modeling processes.

In the information overload age, the user needs to find...




process

Estimating sawmill processing capacity for Tongass timber: 2007 and 2008 update

In spring and summer of 2008 and 2009, sawmill production capacity and utilization information was collected from major wood manufacturers in southeast Alaska. The estimated mill capacity in southeast Alaska for calendar year 2007 was 292,350 thousand board feet (mbf) (log scale), and for calendar year 2008 was 282,350 mbf (log scale).




process

Estimating Sawmill Processing Capacity For Tongass Timber: 2003 and 2004 Update

In spring 2004 and 2005, sawmill capacity and wood utilization information was collected for selected mills in southeast Alaska. The collected information is required to prepare information for compliance with Section 705(a) of the Tongass Timber Reform Act. The total capacity in the region (active and inactive mills) was 370,350 thousand board feet (mbf) Scribner log scale during both calendar years (CYs) 2003 and 2004. The capacity of active mills for the same periods was 255,350 mbf. This is a 7.4-percent increase in active capacity from CY 2002 (237,850 mbf) to CY 2004. The actual volume of material processed during CY 2004 was 31,027 mbf Scribner log scale. This is a 21.9-percent reduction from CY 2002 (39,702 mbf Scribner log scale).




process

Estimating sawmill processing capacity for Tongass timber: 2005 and 2006 update

In spring 2006 and 2007, sawmill capacity and wood utilization information was collected for selected mills in southeast Alaska. The collected information is required to prepare information for compliance with Section 705(a) of the Tongass Timber Reform Act. The total estimated design capacity in the region (active and inactive mills) was 289,850 thousand board feet (mbf) Scribner log scale in calendar year (CY) 2005 and 284,350 mbf in CY 2006. The estimated design capacity of active mills was 259,850 mbf for CY 2005 and 247,850 mbf for CY 2006. This is a 2.9-percent decrease in active design capacity from CY 2004 (255,350 mbf) to CY 2006. The estimated volume of material processed during CY 2006 was 32,141 mbf Scribner log scale. This is a 3.6-percent increase over CY 2004 (31,027 mbf Scribner log scale).




process

Nontimber forest products in the United States: Montreal Process indicators as measures of current conditions and sustainability.

The United States, in partnership with 11 other countries, participates in the Montreal Process. Each country assesses national progress toward the sustainable management of forest resources by using a set of criteria and indicators agreed on by all member countries. Several indicators focus on nontimber forest products (NTFPs). In the United States, permit and contract data from the U.S. Forest Service and the Bureau of Land Management, in addition to several other data sources, were used as a benchmark to assess harvest, value, employment, exports and imports, per capita consumption, and subsistence uses for many NTFPs. The retail value of commercial harvests of NTFPs from U.S. forest lands is estimated at $1.4 billion annually. Nontimber forest products in the United States are important to many people throughout the country for personal, cultural, and commercial uses, providing food security, beauty, connection to culture and tradition, and income.




process

Estimating sawmill processing capacity for Tongass timber: 2009 and 2010

In spring and summer of 2010 and 2011, sawmill production capacity and wood utilization information was collected from major wood manufacturers in southeast Alaska. The estimated mill capacity in southeast Alaska for calendar year (CY) 2009 was 249,350 thousand board feet (mbf) (log scale), and for CY 2010 was 155,850 mbf (log scale), including idle sawmills. Mill consumption in CY 2009 was estimated at 13,422 mbf (log scale), and for CY 2010 was 15,807 mbf (log scale). Wood products manufacturing employment in southeast Alaska increased from 57.5 full-time equivalent positions in 2009 to 63.5 in 2010 despite the loss of 23,500 mbf of capacity in two sawmills owing to fires, the decommissioning of one large sawmill (65,000 mbf), and equipment sales at two small mills (5,000 mbf).




process

Stunning Photos Of The Installation Process For 5G Network Equipment On Mount Everest

AsiaWire China Mobile Hong Kong and Huawei have jointly taken 5G connectivity to the highest-altitude base station to the north...




process

How Can Robotic Process Automation (RPA) Improve Productivity In The Workplace?

Recent advances in technology have helped both small and large companies automate their business processes to improve productivity. Even so, experts have emphasized that productivity has stalled over the last couple of years, and numerous large-scale businesses have complained that their productivity declined despite implementing innovative workplace guidelines to improve workflow. […]




process

And while we’re in the process of missing European...



And while we’re in the process of missing European architecture…

4 more days left to catch my Lightroom presets for 50% off! ⌛️ (at Copenhagen, Denmark)




process

Dynamic Range Processing in Audio Post Production

If listeners find themselves using the volume up and down buttons a lot, level differences within your podcast or audio file are too big.
In this article, we discuss why audio dynamic range processing (or leveling) is more important than loudness normalization, why it depends on factors like the listening environment and the individual character of the content, and why the loudness range descriptor (LRA) is only reliable for speech programs.

Photo by Alexey Ruban.

Why loudness normalization is not enough

Everybody who has lived in an apartment building knows the problem: you want to enjoy a movie late at night, but you're constantly on edge - not only because of the thrilling story, but because your index finger is hovering over the volume down button of your remote. The next loud sound effect is going to come sooner rather than later, and you want to avoid waking up your neighbors with gunshot sounds blasting from your TV.

In our previous post, we talked about the overall loudness of a production. While that's certainly important to keep in mind, the loudness target is only an average value, ignoring how much the loudness varies within a production. The loudness target of your movie might be in the ideal range, yet the level differences between a gunshot and someone whispering can still be enormous - making you turn the volume down for the former and up for the latter.

While the average loudness might be perfect, level differences can lead to an unpleasant listening experience.

Of course, this doesn't apply to movies alone. The image above shows a podcast or radio production. The loud section is music, the very quiet section just breathing, and the remaining sections are different voices.

To be clear, we're not saying that the above example is problematic per se. There are many situations where a big difference in levels - a high dynamic range - is justified: for instance, in a movie theater, optimized for listening and without any outside noise, or in classical music.
Also, if the dynamic range is too small, listening can be tiring.

But if you watch the same movie in an outdoor screening in the summer on a beach next to the crashing waves or in the middle of a noisy city, it can be tricky to hear the softer parts.
Spoken word usually has a smaller dynamic range, and if you produce your podcast for a target audience of train or car commuters, the dynamic range should be even smaller, adjusting for the listening situation.

Therefore, hitting the loudness target has less impact on the listening experience than level differences (dynamic range) within one file!
What makes a suitable dynamic range does not only depend on the listening environment, but also on the nature of the content itself. If the dynamic range is too small, the audio can be tiring to listen to, whereas more variability in levels can make a program more interesting, but might not work in all environments, such as a noisy car.

Dynamic range experiment in a car

Wolfgang Rein, audio technician at SWR, a public broadcaster in Germany, did an experiment to test how drivers react to programs with different dynamic ranges. He monitored to what level drivers set the car stereo depending on speed (and thus noise level) and audio dynamic range.
While the results are preliminary, it seems like drivers set the volume as low as possible so that they can still understand the content, but don't get distracted by loud sounds.

As drivers adjust the volume to the loudest voice in a program, they won't understand quieter speakers in content with a high dynamic range anymore. To some degree and for short periods of time, they can compensate by focusing more on the radio program, but over time that's tiring. Therefore, if the loudness varies too much, drivers tend to switch to another program rather than adjusting the volume.
Similar results have been found in a study conducted by NPR Labs and Towson University.

On the other hand, the perception was different in pure music programs. When drivers set the volume according to louder parts, they weren't able to hear softer segments or the beginning of a song very well. But that did not matter to them as much and didn't make them want to turn up the volume or switch the program.

Listener's reaction in response to frequent loudness changes. (from John Kean, Eli Johnson, Dr. Ellyn Sheffield: Study of Audio Loudness Range for Consumers in Various Listening Modes and Ambient Noise Levels)

Loudness comfort zone

The reaction of drivers to variable loudness hints at something that BBC sound engineer Mike Thornton calls the loudness comfort zone.

Tests (...) have shown that if the short-term loudness stays within the "comfort zone" then the consumer doesn’t feel the need to reach for the remote control to adjust the volume.
In a blog post, he highlights how the series Blue Planet 2 and Planet Earth 2 might not always have been the easiest to listen to. The graph below shows an excerpt with very loud music, followed by commentary just at the bottom of the green comfort zone. Thornton writes: "with the volume set at a level that was comfortable when the music was playing we couldn’t always hear the excellent commentary from Sir David Attenborough and had to resort to turning on the subtitles to be sure we knew what Sir David was saying!"

Planet Earth 2 Loudness Plot Excerpt. Colored green: comfort zone of +3 to -5 LU around the loudness target. (from Mike Thornton: BBC Blue Planet 2 Latest Show In Firing Line For Sound Issues - Are They Right?)

As already mentioned above, a good mix considers the maximum and minimum possible loudness in the target listening environment.
In a movie theater the loudness comfort zone is big (loudness can vary a lot), and loud music is part of the fun, while quiet scenes work just as well. The opposite was true in the aforementioned experiment with drivers, where the loudness comfort zone is much smaller and quiet voices are difficult to understand.

Hence, the loudness comfort zone determines how much dynamic range an audio signal can use in a specific listening environment.

How to measure dynamic range: LRA

When producing audio for various environments, it would be great to have a target value for dynamic range (the difference between the quietest and loudest parts of an audio signal) as well. Then you could just set a dynamic range target, similar to a loudness target.

Theoretically, the maximum possible dynamic range of a production is defined by the bit-depth of the audio format. A 16-bit recording can have a dynamic range of 96 dB; for 24-bit, it's 144 dB - which is well above the approx. 120 dB the human ear can handle. However, most of those bits are typically being used to get to a reasonable base volume. Picture a glass of water: you want it to be almost full, with some headroom so that it doesn't spill when there's a sudden movement, i.e. a bigger amplitude wave at the top.
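As a quick sanity check on those numbers, the theoretical dynamic range of linear PCM follows directly from the bit depth as roughly 20·log10(2^bits) dB; a minimal sketch:

```python
import math

def theoretical_dynamic_range_db(bit_depth: int) -> float:
    """Theoretical dynamic range of linear PCM: 20 * log10(2**bits) dB."""
    return 20 * math.log10(2 ** bit_depth)

print(round(theoretical_dynamic_range_db(16)))  # -> 96
print(round(theoretical_dynamic_range_db(24)))  # -> 144
```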

Determining the dynamic range of a production is easier said than done, though. It depends on which signals are included in the measurement: for example, if something like background music or breathing should be considered at all.
The currently preferred method for broadcasting is called Loudness Range, LRA. It is measured in Loudness Units (LU), and takes into account everything between the 10th and the 95th percentile of a loudness distribution, after an additional gating method. In other words, the loudest 5% and quietest 10% of the audio signal are being ignored. This way, quiet breathing or an occasional loud sound effect won't affect the measurement.
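To make the percentile idea concrete, here is a deliberately simplified sketch of the computation (not the exact EBU R 128 / Tech 3342 implementation, which defines short-term measurement and gating precisely), assuming you already have a list of short-term loudness values in LUFS:

```python
def percentile(values, p):
    """Percentile (0-100) with linear interpolation, like common stats tools."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    nxt = min(f + 1, len(s) - 1)
    return s[f] + (s[nxt] - s[f]) * (k - f)

def loudness_range(short_term_lufs, abs_gate=-70.0, rel_gate=-20.0):
    """Simplified LRA: gate the short-term loudness values, then take the
    spread between the 10th and 95th percentile of what remains (in LU)."""
    st = [x for x in short_term_lufs if x > abs_gate]        # absolute gate
    mean = sum(st) / len(st)
    st = [x for x in st if x > mean + rel_gate]              # relative gate
    return percentile(st, 95) - percentile(st, 10)

# Mostly speech around -23 LUFS, a loud music part, and some silence:
st = [-23.0] * 80 + [-14.0] * 10 + [-80.0] * 5
print(loudness_range(st))  # -> 9.0
```

The silence at -80 LUFS is discarded by the absolute gate, so it does not drag the measurement down; only the speech/music spread remains.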

Loudness distribution and LRA for the film 'The Matrix'. Figure from EBU Tech Doc 3343 (p.13).

However, the main difficulty is deciding which signals should be included in the loudness range measurement and which ones should be gated. This is unfortunately often very subjective and difficult to capture with a purely statistical method like LRA.

Where LRA falls short

Therefore, only pure speech programs give reliable LRA values that are comparable!
For instance, a typical LRA for news programs is 3 LU; for talks and discussions 5 LU is common. LRA values for features, radio dramas, movies or music very much depend on the individual character and might be in the range between 5 and 25 LU.

To further illustrate this, here are some typical LRA values, according to a paper by Thomas Lund (table 2):

Program | Loudness Range (LU)
Matrix, full movie | 25.0
NBC Interstitials, Jan. 2008, all together (3:30) | 9.4
Friends Episode 16 | 6.6
Speak Ref., Male, German, SQUAM Trk 54 | 6.2
Speak Ref., Female, French, SQUAM Trk 51 | 4.8
Speak Ref., Male, English, Sound Check | 3.3
Wish You Were Here, Pink Floyd | 22.1
Gilgamesh, Battle of Titans, Osaka Symph. | 19.7
Don’t Cry For Me Arg., Sinead O’Conner | 13.7
Beethoven Son in F, Op17, Kliegel & Tichman | 12.0
Rock’n Roll Train, AC/DC | 6.0
I.G.Y., Donald Fagen | 3.6

LRA values of music are very unpredictable as well.
For instance, Tom Frampton measured the LRA of songs in multiple genres, and the differences within each genre are quite big. The ten pop songs that he analyzed varied in LRA between 3.7 and 12 LU, country songs between 3.6 and 14.9 LU. In the Electronic genre the individual LRAs were between 3.7 and 15.2 LU. Please see the tables at the bottom of his blog post for more details.

We at Auphonic also tried to base our Adaptive Leveler parameters on the LRA descriptor. Although it worked, it turned out that it is very difficult to set a loudness range target for diverse audio content that includes speech, background sounds, music parts, etc. The results were not predictable and it was hard to find good target values. Therefore we developed our own algorithm to measure the dynamic range of audio signals.

In conclusion, LRA comparisons are only useful for productions with spoken word only and the LRA value is therefore not applicable as a general dynamic range target value. The more complex a production gets, the more difficult it is to make any judgment based on the LRA.
This is because the definition of LRA is purely statistical. There's no smart measurement using classifiers that distinguish between music, speech, quiet breathing, background noises and other types of audio. One would need a more intelligent algorithm (as we use in our Adaptive Leveler) that knows which audio segments should be included in and excluded from the measurement.

From theory to application: tools

Loudness and dynamic range are clearly a complicated topic. Luckily, there are tools that can help. To keep short-term loudness in range, a compressor can help control sudden changes in loudness - such as p-pops or hard consonants like t or k. To achieve a good mid-term loudness, i.e. a signal that doesn't leave the comfort zone too often, a leveler is a good option; alternatively, use a fader or manually adjust volume curves. And to make sure that separate productions sound consistent, loudness normalization is the way to go. We have covered all of this in-depth before.
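To illustrate the basic leveling idea, here is a deliberately naive sketch (not Auphonic's actual algorithm, which uses classifiers and much more careful measurement): estimate the RMS of each window, compute the gain that would bring it to a target level, and smooth that gain over time so corrections don't pump:

```python
import math

def level(samples, rate, target_rms=0.1, win=0.4, smooth=0.9):
    """Toy leveler sketch: per-window RMS measurement, gain toward a
    target RMS, one-pole smoothing of the gain to avoid pumping."""
    n = max(1, int(win * rate))
    out, gain = [], 1.0
    for i in range(0, len(samples), n):
        chunk = samples[i:i + n]
        rms = math.sqrt(sum(x * x for x in chunk) / len(chunk))
        desired = target_rms / rms if rms > 1e-6 else 1.0    # leave silence alone
        gain = smooth * gain + (1 - smooth) * desired        # one-pole smoothing
        out.extend(x * gain for x in chunk)
    return out
```

For example, a constant signal at 0.5 amplitude is gradually pulled down toward the 0.1 target, while near-silent windows keep their previous gain instead of being boosted into audibility.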

Looking at the audio from above again, with an adaptive leveler applied it looks like this:

Leveler example. Output at the top, input with leveler envelope at the bottom.

Now, the voices are evened out and the music is at a comfortable level, while the breathing has not been touched at all.
We recently extended Auphonic's adaptive leveler, so that it is now possible to customize the dynamic range - please see adaptive leveler customization and advanced multitrack audio algorithms.
If you wanted to increase the loudness comfort zone (or dynamic range) of the standard preset by 10 dB (or LU), for example, the envelope would look like this:

Leveler with higher dynamic range, only touching sections with extremely low or extremely high loudness to fit into a specific loudness comfort zone.

When a production is done, our adaptive leveler uses classifiers to also calculate the integrated loudness and loudness range of dialog and music sections separately. This way it is possible to just compare the dialog LRA and loudness of complex productions.

Assessing the LRA and loudness of dialog and music separately.

Conclusion

Getting audio dynamics right is not easy. Yet, it is an important thing to keep in mind, because focusing on loudness normalization alone is not enough. In fact, hitting the loudness target often has less impact on the listening experience than level differences, i.e. audio dynamics.

If the dynamic range is too small, the audio can be tiring to listen to, whereas a bigger dynamic range can make a program more interesting, but might not work in loud environments, such as a noisy train.
Therefore, a good mix adapts the audio dynamic range according to the target listening environment (different loudness comfort zones in cinema, at home, in a car) and according to the nature of the content (radio feature, movie, podcast, music, etc.).

Furthermore, because the definition of the loudness range / LRA is purely statistical, only speech programs give reliable LRA values that are comparable.
More "intelligent" algorithms are in development, which use classifiers to decide which signals should be included and excluded from the dynamic range measurement.

If you understand German, take a look at our presentation about audio dynamic processing in podcasts for further information:








process

✚ Tornado Lines – Useful or Not? (The Process 088)

It looks like a tornado. It's messy. It's circular. It almost looks intentionally confusing. But how bad is it really?





process

12 symptoms of a back-to-front design process

Everyday consumer products continue to frustrate people. The failure of companies to fully embrace UX is partly to blame, but there is also another reason -- one that is seldom discussed. Consumer product companies pay too much heed to their retail customers and, in so doing, they prevent the development team from getting first-hand knowledge of end users.




process

Multitype branching process with nonhomogeneous Poisson and generalized Pólya immigration. (arXiv:1909.03684v2 [math.PR] UPDATED)

In a multitype branching process, immigrants are assumed to arrive according to a nonhomogeneous Poisson or a generalized Pólya process (both can be formulated as nonhomogeneous birth processes with an appropriate choice of transition intensities). We show that the renormalized numbers of objects of the various types alive at time $t$ jointly converge in distribution under those two different arrival processes, in the supercritical, critical, and subcritical cases. Furthermore, some transient moment analysis is provided for the case of only two types of particles. AMS 2000 subject classifications: Primary 60J80, 60J85; secondary 60K10, 60K25, 90B15.
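For intuition about branching with immigration, a toy discrete-generation, single-type analogue can be simulated directly (the paper's model is multitype with continuous-time nonhomogeneous arrivals; this sketch only illustrates the stabilizing effect of immigration in the subcritical regime):

```python
import math
import random

def branching_with_immigration(generations=50, offspring_mean=0.8,
                               immigration_mean=2.0, seed=1):
    """Each individual leaves a Poisson(offspring_mean) number of children;
    Poisson(immigration_mean) immigrants arrive every generation. Subcritical
    offspring (mean < 1) plus immigration keeps the population fluctuating
    around immigration_mean / (1 - offspring_mean)."""
    rng = random.Random(seed)

    def poisson(lam):                       # Knuth's algorithm
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    pop, history = 0, []
    for _ in range(generations):
        pop = sum(poisson(offspring_mean) for _ in range(pop)) \
              + poisson(immigration_mean)
        history.append(pop)
    return history
```

With the defaults above the population hovers around 2.0 / (1 - 0.8) = 10 individuals rather than dying out or exploding.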




process

Infinite dimensional affine processes. (arXiv:1907.10337v3 [math.PR] UPDATED)

The goal of this article is to investigate infinite dimensional affine diffusion processes on the canonical state space. This includes a derivation of the corresponding system of Riccati differential equations and an existence proof for such processes, which has been missing in the literature so far. For the existence proof, we will regard affine processes as solutions to infinite dimensional stochastic differential equations with values in Hilbert spaces. This requires a suitable version of the Yamada-Watanabe theorem, which we will provide in this paper. Several examples of infinite dimensional affine processes accompany our results.




process

Functional convex order for the scaled McKean-Vlasov processes. (arXiv:2005.03154v1 [math.PR])

We establish functional convex order results for two scaled McKean-Vlasov processes $X=(X_{t})_{t\in[0, T]}$ and $Y=(Y_{t})_{t\in[0, T]}$ defined by

\[\begin{cases} dX_{t}=(\alpha X_{t}+\beta)\,dt+\sigma(t, X_{t}, \mu_{t})\,dB_{t}, \quad X_{0}\in L^{p}(\mathbb{P}),\\ dY_{t}=(\alpha Y_{t}+\beta)\,dt+\theta(t, Y_{t}, \nu_{t})\,dB_{t}, \quad Y_{0}\in L^{p}(\mathbb{P}). \end{cases}\] If we make the convexity and monotonicity assumption (only) on $\sigma$, and if $\sigma\leq\theta$ with respect to the partial matrix order, then the convex order of the initial random variables, $X_0 \leq Y_0$, can be propagated to the whole paths of the processes $X$ and $Y$. That is, for a convex functional $F$ with polynomial growth defined on the path space, we have $\mathbb{E}\,F(X)\leq\mathbb{E}\,F(Y)$; for a convex functional $G$ defined on the product of the path space and its marginal distribution space, we have $\mathbb{E}\,G\big(X, (\mu_t)_{t\in[0, T]}\big)\leq \mathbb{E}\,G\big(Y, (\nu_t)_{t\in[0, T]}\big)$ under appropriate conditions. The symmetric setting is also valid: if $\theta \leq \sigma$ and $Y_0 \leq X_0$ with respect to the convex order, then $\mathbb{E}\,F(Y) \leq \mathbb{E}\,F(X)$ and $\mathbb{E}\,G\big(Y, (\nu_t)_{t\in[0, T]}\big)\leq \mathbb{E}\,G\big(X, (\mu_t)_{t\in[0, T]}\big)$. The proof is based on forward and backward dynamic programming arguments and the convergence of the Euler scheme for the McKean-Vlasov equation.
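For intuition, equations of this form are commonly simulated with an interacting-particle Euler scheme, replacing the law $\mu_t$ by the empirical measure of $N$ particles. Below is a minimal sketch in which the diffusion coefficient is a made-up illustrative function of the empirical mean (the paper's setting is far more general):

```python
import random
import statistics

def simulate_particles(n=2000, steps=200, T=1.0, alpha=-1.0, beta=0.5, seed=0):
    """Euler scheme for a particle approximation of
    dX_t = (alpha*X_t + beta) dt + sigma(mu_t) dB_t, where sigma depends on
    the law mu_t only through the empirical mean of the particle system
    (an illustrative, hypothetical choice of coefficient)."""
    rng = random.Random(seed)
    dt = T / steps
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        m = statistics.fmean(x)                  # empirical stand-in for mu_t
        sig = 0.3 * (1.0 + abs(m))               # illustrative sigma(mu_t)
        x = [xi + (alpha * xi + beta) * dt + sig * rng.gauss(0.0, dt ** 0.5)
             for xi in x]
    return x
```

With mean-reverting drift (alpha = -1, beta = 0.5) the empirical mean of the particle cloud drifts from 0 toward beta/|alpha| = 0.5 as expected.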




process

Box Covers and Domain Orderings for Beyond Worst-Case Join Processing. (arXiv:1909.12102v2 [cs.DB] UPDATED)

Recent beyond worst-case optimal join algorithms Minesweeper and its generalization Tetris have brought the theory of indexing and join processing together by developing a geometric framework for joins. These algorithms take as input an index $\mathcal{B}$, referred to as a box cover, that stores output gaps that can be inferred from traditional indexes, such as B+ trees or tries, on the input relations. The performances of these algorithms highly depend on the certificate of $\mathcal{B}$, which is the smallest subset of gaps in $\mathcal{B}$ whose union covers all of the gaps in the output space of a query $Q$. We study how to generate box covers that contain small size certificates to guarantee efficient runtimes for these algorithms. First, given a query $Q$ over a set of relations of size $N$ and a fixed set of domain orderings for the attributes, we give a $\tilde{O}(N)$-time algorithm called GAMB which generates a box cover for $Q$ that is guaranteed to contain the smallest size certificate across any box cover for $Q$. Second, we show that finding a domain ordering to minimize the box cover size and certificate is NP-hard through a reduction from the 2 consecutive block minimization problem on boolean matrices. Our third contribution is a $\tilde{O}(N)$-time approximation algorithm called ADORA to compute domain orderings, under which one can compute a box cover of size $\tilde{O}(K^r)$, where $K$ is the minimum box cover for $Q$ under any domain ordering and $r$ is the maximum arity of any relation. This guarantees certificates of size $\tilde{O}(K^r)$. We combine ADORA and GAMB with Tetris to form a new algorithm we call TetrisReordered, which provides several new beyond worst-case bounds. On infinite families of queries, TetrisReordered's runtimes are unboundedly better than the bounds stated in prior work.




process

The Zhou Ordinal of Labelled Markov Processes over Separable Spaces. (arXiv:2005.03630v1 [cs.LO])

There exist two notions of equivalence of behavior between states of a Labelled Markov Process (LMP): state bisimilarity and event bisimilarity. The first one can be considered as an appropriate generalization to continuous spaces of Larsen and Skou's probabilistic bisimilarity, while the second one is characterized by a natural logic. C. Zhou expressed state bisimilarity as the greatest fixed point of an operator $\mathcal{O}$, and thus introduced an ordinal measure of the discrepancy between it and event bisimilarity. We call this ordinal the "Zhou ordinal" of $\mathbb{S}$, $\mathfrak{Z}(\mathbb{S})$. When $\mathfrak{Z}(\mathbb{S})=0$, $\mathbb{S}$ satisfies the Hennessy-Milner property. The second author proved the existence of an LMP $\mathbb{S}$ with $\mathfrak{Z}(\mathbb{S}) \geq 1$ and Zhou showed that there are LMPs having an infinite Zhou ordinal. In this paper we show that there are LMPs $\mathbb{S}$ over separable metrizable spaces having arbitrarily large countable $\mathfrak{Z}(\mathbb{S})$ and that it is consistent with the axioms of $\mathit{ZFC}$ that there is such a process with an uncountable Zhou ordinal.




process

Scheduling with a processing time oracle. (arXiv:2005.03394v1 [cs.DS])

In this paper we study a single machine scheduling problem on a set of independent jobs whose execution times are not known, but are guaranteed to be either short or long, for two given processing times. At every time step, the scheduler can either test a job by querying a processing time oracle, which reveals the job's processing time and occupies one time unit on the schedule, or execute a job, whether previously tested or not. The objective value is the total completion time over all jobs, and is compared with the objective value of an optimal schedule, which does not need to test. The resulting competitive ratio measures the price of hidden processing time.

Two models are studied in this paper. In the non-adaptive model, the algorithm needs to decide beforehand which jobs to test and which jobs to execute untested. In the adaptive model, the algorithm can make these decisions adaptively based on the outcomes of the job tests. In both models we provide optimal polynomial-time two-phase algorithms, which consist of a first phase where jobs are tested and a second phase where jobs are executed untested. Experiments give strong evidence that optimal algorithms have this structure. Proving this property is left as an open problem.




process

Probabilistic Hyperproperties of Markov Decision Processes. (arXiv:2005.03362v1 [cs.LO])

We study the specification and verification of hyperproperties for probabilistic systems represented as Markov decision processes (MDPs). Hyperproperties are system properties that describe the correctness of a system as a relation between multiple executions. Hyperproperties generalize trace properties and include information-flow security requirements, like noninterference, as well as requirements like symmetry, partial observation, robustness, and fault tolerance. We introduce the temporal logic PHL, which extends classic probabilistic logics with quantification over schedulers and traces. PHL can express a wide range of hyperproperties for probabilistic systems, including both classical applications, such as differential privacy, and novel applications in areas such as robotics and planning. While the model checking problem for PHL is in general undecidable, we provide methods both for proving and for refuting a class of probabilistic hyperproperties for MDPs.




process

Enhancing Software Development Process Using Automated Adaptation of Object Ensembles. (arXiv:2005.03241v1 [cs.SE])

Software development has been changing rapidly, and the development process can be accelerated through developer-friendly approaches: we can save time if we can automatically guide the programmer during development. Some existing approaches recommend relevant code snippets and API items to the developer; some apply general code-searching techniques, and others use online repository-mining strategies. However, it is difficult to help programmers when they face particular type-conversion problems, more specifically when they want to adapt existing interfaces to their expectations. One familiar way to guide developers in such situations is adapting collections and arrays through automated adaptation of object ensembles. But how does this help a novice developer in real-time software development when the conversion is not explicitly specified? In this paper, we have developed a system, titled Automated Object Ensembles (AOE plugin), that works as a plug-in tool integrated with a particular Data Mining Integrated Environment (DMIE) to recommend relevant interfaces when developers encounter a type-conversion situation. We maintain a mined repository of adapter classes and related APIs, against which developers run their queries and obtain results via the relevant transformer classes. Our investigation shows that this approach performs considerably better than some existing approaches.




process

Determinantal Point Processes in Randomized Numerical Linear Algebra. (arXiv:2005.03185v1 [cs.DS])

Randomized Numerical Linear Algebra (RandNLA) uses randomness to develop improved algorithms for matrix problems that arise in scientific computing, data science, machine learning, etc. Determinantal Point Processes (DPPs), a seemingly unrelated topic in pure and applied mathematics, are a class of stochastic point processes with probability distribution characterized by sub-determinants of a kernel matrix. Recent work has uncovered deep and fruitful connections between DPPs and RandNLA which lead to new guarantees and improved algorithms that are of interest to both areas. We provide an overview of this exciting new line of research, including brief introductions to RandNLA and DPPs, as well as applications of DPPs to classical linear algebra tasks such as least squares regression, low-rank approximation and the Nyström method. For example, random sampling with a DPP leads to new kinds of unbiased estimators for least squares, enabling more refined statistical and inferential understanding of these algorithms; a DPP is, in some sense, an optimal randomized algorithm for the Nyström method; and a RandNLA technique called leverage score sampling can be derived as the marginal distribution of a DPP. We also discuss recent algorithmic developments, illustrating that, while not quite as efficient as standard RandNLA techniques, DPP-based algorithms are only moderately more expensive.
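The defining property of an L-ensemble DPP can be made concrete on a tiny ground set (a hedged sketch with an assumed kernel, not one of the survey's algorithms): the probability of selecting subset S is det(L_S) / det(L + I), and the identity sum over all S of det(L_S) = det(L + I) guarantees the probabilities sum to 1.

```python
from itertools import combinations

# An assumed symmetric, positive semidefinite kernel on 3 items.
L = [[2.0, 0.5, 0.0],
     [0.5, 1.0, 0.3],
     [0.0, 0.3, 1.5]]

def det(M):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 0:
        return 1.0  # empty principal minor: det = 1
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def submatrix(M, S):
    return [[M[i][j] for j in S] for i in S]

n = 3
norm = det([[L[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)])
probs = {S: det(submatrix(L, S)) / norm
         for k in range(n + 1) for S in combinations(range(n), k)}
total_mass = sum(probs.values())
```

Subsets of similar items (large off-diagonal kernel entries) get small sub-determinants and hence low probability, which is the diversity-promoting behavior RandNLA exploits.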




process

On Optimal Control of Discounted Cost Infinite-Horizon Markov Decision Processes Under Local State Information Structures. (arXiv:2005.03169v1 [eess.SY])

This paper investigates a class of optimal control problems associated with Markov processes with local state information. The decision-maker has only local access to a subset of the state vector information, as often encountered in decentralized control problems in multi-agent systems. Under this information structure, part of the state vector cannot be observed. We leverage ab initio principles and find a new form of Bellman equations to characterize the optimal policies of the control problem under local information structures. The dynamic programming solutions feature a mixture of dynamics associated with the unobservable state components and the local state-feedback policy based on the observable local information. We further characterize the optimal local-state feedback policy using linear programming methods. To reduce the computational complexity of the optimal policy, we propose an approximate algorithm based on virtual beliefs to find a sub-optimal policy. We show the performance bounds on the sub-optimal solution and corroborate the results with numerical case studies.
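As background to the fully observed case that the paper's local-information setting generalizes, standard value iteration for a discounted-cost MDP looks as follows (a toy sketch with an assumed two-state model, not the paper's Bellman equations under partial observation):

```python
# transitions[s][a] = list of (probability, next_state, cost)
transitions = {
    0: {"stay": [(1.0, 0, 1.0)], "go": [(0.9, 1, 2.0), (0.1, 0, 2.0)]},
    1: {"stay": [(1.0, 1, 0.0)], "go": [(1.0, 0, 0.5)]},
}

def value_iteration(transitions, gamma=0.9, tol=1e-8):
    """Discounted-cost value iteration with full state observation."""
    V = {s: 0.0 for s in transitions}
    while True:
        new_V = {
            s: min(sum(p * (c + gamma * V[t]) for p, t, c in outcomes)
                   for outcomes in acts.values())
            for s, acts in transitions.items()
        }
        if max(abs(new_V[s] - V[s]) for s in V) < tol:
            return new_V
        V = new_V

V = value_iteration(transitions)
```

When only part of the state is observable, the minimization above can no longer condition on the full state s, which is exactly the difficulty the paper's new Bellman-type equations and virtual-belief approximation address.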




process

Website Redesign Process: Your Website Redesign Strategy in 5 Steps

Your website is your virtual business card and it often provides the first impression of your business to future customers — making it one of the most important aspects of your company. But if your website still has cobwebs from the 2000s, it’s time to put together a website redesign process. A website redesign process […]

The post Website Redesign Process: Your Website Redesign Strategy in 5 Steps appeared first on WebFX Blog.




process

Process for the preparation of O-desmethyl venlafaxine and intermediate for use therein

The present invention relates to a compound of formula A, wherein R is alkyl. Compound A may be used as an intermediate in the preparation of O-desmethyl venlafaxine or a salt thereof, and the present invention provides such a preparation, as well as a process for preparing the compound of formula A.




process

Process for preparing primary intermediates for dyeing keratin fibers

A process has been developed for preparing 2-methoxymethyl-1,4-benzenediamine (IV-a), other compounds of formula (IV), and the salts thereof, all of which may be used as primary intermediates in compositions for dyeing keratin fibers.




process

Process for producing acesulfame potassium

In one embodiment, the invention relates to processes for producing acesulfame potassium. In one embodiment, the process comprises the step of reacting a first reaction mixture to form an amidosulfamic acid salt such as a trialkyl ammonium amidosulfamic acid salt. The first reaction mixture comprises sulfamic acid, an amine, and smaller amounts, if any, of acetic acid, e.g., less than 1 wt % (10000 wppm). In terms of ranges, the first reaction mixture may comprise from 1 wppm to 1 wt % acetic acid. The process further comprises the step of reacting the amidosulfamic acid salt with diketene to form an acetoacetamide salt. In preferred embodiments, the amidosulfamic acid salt formation reaction is conducted at pH levels from 5.5 to 7.0. The process further comprises the step of deriving the acesulfame-K from the acetoacetamide salt.




process

Process for preparing carboxylic acid amides useful in the treatment of muscular disorders

The present invention relates to a process for preparing a compound of formula wherein: R2 is cycloalkyl or alkyl, each of which may be optionally substituted; Y is —CONR3R4, —CN or CO2R5; R3, R4 and R5 are each independently H or alkyl; n is 1 to 6; wherein said process comprises the steps of: (i) treating a compound of formula (IV), where R1 is alkyl, with a compound of formula (V) and forming a compound of formula (IIIb); (ii) treating said compound of formula (IIIb) with a compound of formula (I1) to form a compound of formula (I).




process

Process for the preparation of crystalline forms of agomelatine and novel polymorph thereof

The invention concerns a new process for the preparation of crystalline form of agomelatine from a solution of agomelatine in a solvent, characterized in that the agomelatine is crystallized by instantaneous precipitation from said solution, at a temperature equal to or below −10° C.




process

Process for reductive amination of aliphatic cyanoaldehydes to aliphatic diamines

A process for reductive amination of aliphatic cyanoaldehydes to aliphatic diamines comprising (1) providing a mixture of 1,3-cyanocyclohexane carboxaldehyde and/or 1,4-cyanocyclohexane carboxaldehyde; (2) contacting said mixture with a metal carbonate based solid bed or a weak base anion exchange resin bed at a temperature from 15 to 40 ° C. for a period of at least 1 minute; (3) thereby treating said mixture, wherein said treated mixture has a pH in the range of 6 to 9; (4) feeding said treated mixture, hydrogen, and ammonia into a continuous reductive amination reactor system; (6) contacting said treated mixture, hydrogen, and ammonia with each other in the presence of one or more heterogeneous metal based catalyst systems at a temperature from 80 ° C. to 160 ° C. and a pressure from 700 to 3500 psig; (7) thereby producing one or more cycloaliphatic diamines is provided.




process

Process for the synthesis of arformoterol

The present invention provides a process for preparing a compound of formula (VI) or a salt thereof, the process comprising: (i) reacting 4-methoxyphenyl acetone with an amine of formula (VIII) under conditions of reductive amination to produce a compound of formula (II) or a salt thereof, wherein there is no isolation of an imine intermediate formed during the reductive amination; (ii) condensing the compound (II) or the acid addition salt thereof with an α-haloketone of formula (III) to produce the compound of formula (IV); (iii) reducing the compound (IV) to a compound of formula (V); and (iv) reducing the compound (V) to the compound of formula (VI), wherein the reduction is carried out in the presence of either (1) a hydrogen donating compound in the presence of a hydrogen transfer catalyst; or (2) ammonium formate using a hydrogenation catalyst, wherein R1 and R2 are independently optionally substituted arylalkyl, and Hal is selected from chloro or bromo.




process

Process for preparing alkylated p-phenylenediamines

A process for preparing alkylated p-phenylenediamine having the steps of reacting aniline and nitrobenzene in presence of a complex base catalyst to obtain 4-aminodiphenylamine intermediates, hydrogenating the 4-aminodiphenylamine intermediates to 4-aminodiphenylamine in presence of a hydrogenation catalyst, and reductively alkylating the 4-aminodiphenylamine to alkylated p-phenylenediamine.




process

Aminoethylation process having improved yield of aryloxyalkylene amine compounds and reduced urea by-products

Disclosed is a process for preparing an aryloxyalkylene amine compound via an aminoethylation reaction comprising: a) reacting an aromatic hydroxyl compound in the presence of a basic catalyst with a 2-oxazolidinone compound of the formula II to form an intermediate reaction product; wherein R3 is selected from the group consisting of hydrogen or lower alkyl having 1 to 6 carbon atoms, R4 is selected from the group consisting of hydrogen, straight or branched chain alkyl having from one to six carbon atoms, phenyl, alkaryl, or arylalkyl; and b) reacting the intermediate product of step a) with a polyalkylene polyamine.




process

Process for the conversion of aliphatic cyclic amines to aliphatic diamines

A process for conversion of aliphatic bicyclic amines to aliphatic diamines including contacting one or more bicyclic amines selected from the group consisting of 3-azabicyclo[3.3.1]nonane and azabicyclo[3.3.1]non-2-ene with ammonia, hydrogen, and alcohols in the presence of heterogeneous metal based catalyst systems comprising a metal selected from the group consisting of Co, Ni, Ru, Fe, Cu, Re, Pd, and their oxides, at a temperature from 140° C. to 200° C. and a pressure from 1540 to 1735 psig for at least one hour in a reactor system; forming a product mixture comprising aliphatic diamine(s), bicyclic amine(s), ammonia, hydrogen, and alcohol(s); removing said product mixture from the reactor system; removing at least some of the ammonia, hydrogen, water, alcohols, and bicyclic amines from said product mixture; thereby separating the aliphatic diamines from said product mixture.




process

Process for making ethoxylated amine compounds

An improved process for making ethoxylated amine compounds such as ethanolamines. The improvement comprises the addition of an acid to the amine compound prior to the addition of ethylene oxide to a reactor wherein the ethoxylated amine compound is prepared. The improvement reduces the concentration of undesirable glycol ether and/or vinyl ether ethoxylate byproducts which may contribute to undesirable properties, such as color and foaming, of the ethoxylated amine compounds.




process

Best match processing mode of decision tables

An input combination of at least one condition value to be evaluated against at least one rule of a decision table is received. The at least one rule includes at least one condition and the rule is associated with a result. The at least one rule is evaluated against the input combination to determine conditions fulfilled for the at least one condition value. In one aspect, a rule from the at least one rule that best matches the input combination is determined and a result associated with the rule that best matches the input combination is outputted.
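The best-match mode described above can be sketched in a few lines (a hedged illustration with assumed rule and attribute names, not the patented implementation): each rule lists condition values, a wildcard matches anything, and among the rules whose stated conditions are all fulfilled, the one constraining the most conditions wins.

```python
rules = [
    # (condition values, result); None means the condition is unconstrained
    (("gold", None), "discount-20"),
    (("gold", "US"), "discount-25"),
    ((None, "US"),   "discount-5"),
]

def best_match(rules, inputs):
    """Return the result of the rule that best matches the input combination."""
    best, best_score = None, -1
    for conds, result in rules:
        # A rule matches only if every non-wildcard condition is fulfilled.
        if all(c is None or c == v for c, v in zip(conds, inputs)):
            score = sum(c is not None for c in conds)  # specificity
            if score > best_score:
                best, best_score = result, score
    return best
```

For the input `("gold", "US")` all three rules match, and the second is returned because it fulfills two non-wildcard conditions rather than one.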




process

Process analysis

An apparatus and method are disclosed for analysing a process. An exemplary method includes: generating a process template; and determining a probabilistic model specifying the process template. The method can include use of task nodes for tasks of the process; observables nodes for observables that may be caused by performance of the tasks; and a background activities node, wherein observables may further be caused by background activities of the background node. The method can include measuring values of an observable corresponding to one of the observables nodes; and updating a probabilistic estimate of the process state using the measured values.
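The update step has a simple Bayesian core, sketched here with assumed numbers (a toy illustration, not the disclosed apparatus): an observable can be caused either by a process task or by background activity, and measuring it updates the probabilistic estimate that the task has run.

```python
p_task = 0.3            # prior: probability the task has been performed
p_obs_given_task = 0.9  # the observable fires if the task ran
p_obs_given_bg = 0.2    # background activities can also cause the observable

def update(p_task, observed):
    """Posterior probability the task ran, after seeing the observable."""
    num_t = p_task * (p_obs_given_task if observed else 1 - p_obs_given_task)
    num_b = (1 - p_task) * (p_obs_given_bg if observed else 1 - p_obs_given_bg)
    return num_t / (num_t + num_b)

posterior = update(p_task, observed=True)
```

Seeing the observable raises the estimate above the prior; its absence lowers it, because background activities alone rarely explain a firing observable under these assumed rates.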




process

Correlating data from multiple business processes to a business process scenario

The present disclosure involves systems, software, and computer-implemented methods for providing process intelligence by correlating events from multiple business process systems to a single business scenario using configurable correlation strategies. An example method includes identifying a raw event associated with a sending business process and a receiving business process, identifying a sending business process attribute associated with the sending business process and a receiving business process attribute associated with the receiving business process, determining a correlation strategy for associating the raw event with a business scenario instance, the determination based at least in part on the sending business process attribute and the receiving business process attribute, and generating a visibility scenario event from the raw event according to the correlation strategy, the visibility scenario event associated with the business scenario instance.
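A minimal sketch of such a correlation strategy, with assumed event attributes (the product's configurable strategies are more elaborate): raw events from the sending and receiving processes are grouped into scenario instances by a shared correlation key.

```python
from collections import defaultdict

def correlate(events, key_fn):
    """Group raw events into scenario instances by a correlation key."""
    scenarios = defaultdict(list)
    for ev in events:
        scenarios[key_fn(ev)].append(ev)
    return dict(scenarios)

events = [
    {"process": "ordering",  "order_id": "42", "type": "OrderSent"},
    {"process": "invoicing", "order_id": "42", "type": "InvoiceCreated"},
    {"process": "ordering",  "order_id": "7",  "type": "OrderSent"},
]
# Strategy: sender and receiver share the order_id attribute.
scenarios = correlate(events, key_fn=lambda ev: ev["order_id"])
```

Choosing `key_fn` from the sending and receiving process attributes is the configurable part: different attribute pairs yield different ways of stitching events into one business scenario.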




process

Process for fractionating crude triglyceride oil

The present invention relates to an improved process for fractionating triglyceride oil. The process according to the present invention attains a reproducible crystallization by introducing a controlled temperature profile and ensuing crystal development that reduce the amount of entrapped olein inside the crystals or crystal aggregates. The process of the present invention may be used to fractionate vegetable oils such as palm oil or its blends with other palm oil products or edible vegetable oils.




process

Process for the production of bio-oil from municipal solid waste

A process for producing bio-oil from municipal solid waste, the process including: a) liquefying municipal solid waste, to obtain a mixture containing an oily phase containing bio-oil, a solid phase, and a first aqueous phase; b) treating the first aqueous phase from a) with an adsorbing material, to obtain a second aqueous phase; c) fermenting the second aqueous phase from b), to obtain a biomass; d) subjecting the biomass obtained in c) to the liquefaction a). The bio-oil obtained is advantageously used in the production of biofuels for motor vehicles or for the generation of electric energy or heat.




process

Catalytic processes for preparing estolide base oils

Provided herein are processes for preparing estolides and estolide base oils from fatty acid reactants utilizing catalysts. Further provided herein are processes for preparing carboxylic esters from at least one carboxylic acid reactant and at least one olefin.




process

Process for separation of renewable materials from microorganisms

Methods of separating renewable materials, such as lipids, from microorganisms, such as oleaginous yeasts, may include conditioning cell walls of the microorganisms to form, open or enlarge pores, and removing at least a portion of the renewable material through the pores. These methods may result in delipidated microorganisms with cell walls that are substantially intact and with mesopores. These delipidated microorganisms may be used to produce biofuels.




process

Process for producing biodiesel through lower molecular weight alcohol-targeted cavitation

A method for producing fatty acid alkyl esters from biolipids through transesterification and/or esterification reactions uses a flow-through cavitation device for generating cavitation bubbles in a fluidic reaction medium. The fluidic medium is passed through sequential compartments in the cavitation device having varying diameters and inner surface features to create localized reductions in fluid pressure thus vaporizing volatile alcohols and creating an increased surface area and optimized conditions for the reaction to occur at the gas-liquid interface around the bubbles.




process

Process for making esters

The invention relates to a process for making esters, in particular biodiesel, using heterogeneous catalysts. The invention provides a process for making biodiesel, in particular FAME, which is versatile and robust. The process of the invention can be operated in a continuous fashion, in particular in a fixed bed reactor or a slurry reactor. In accordance with the invention, the transesterification reaction of triglycerides is carried out using a heterogeneous catalyst that comprises a Group 4 silicate and less than 3 wt. % Na in the presence of at least one acid compound.