bust

Euro Bonds or Bust? Europe Struggling to Find a Joint Approach to the Corona Catastrophe

Faced with a growing economic crisis, many European Union member states are clamoring for the introduction of so-called corona bonds. Just as in the euro crisis, though, Germany is opposed. In the end, Berlin may not have a choice. By DER SPIEGEL Staff




bust

Off-duty FDNY firefighter busted for DWI after found sleeping in parked car

Police discovered firefighter Lenzell Ross, 28, snoozing in his parked Tesla on the side of the Belt Parkway near Shore Parkway in Lindenwood about 2:30 a.m., authorities said.




bust

Brooklyn’s McCarren Park bustles even as state goes ‘on pause’ over coronavirus

Friends played touch football and shared sweaty exercise equipment while others kept their distance




bust

Cops bust thief trying to steal dozens of shoes and baseball caps from closed Brooklyn Foot Locker

Suspect Donte West, 28, broke into a side door of the shoe store on Pitkin Ave. near Bristol St. in Brownsville about 8:45 a.m. Saturday and loaded up a Chevy Trailblazer with more than three dozen pairs of sneakers and nearly 40 baseball caps as cops arrived, police said.




bust

DEA investigator busted in sting for trying to arrange sex with 14-year-old: officials

Frederick Scheinin, 29, of Sunnyside, Queens, allegedly chatted for months with a federal agent posing as a minor, and now faces charges in Manhattan federal court of attempting to entice a minor and attempting to produce child pornography.




bust

Off-duty FDNY EMT busted for attacking ex-girlfriend in drunken rage, one of three city employees arrested for domestic incidents in eight-hour span

Robert Soto, 33, showed up drunk to his ex-lover’s Morrisania apartment some time before midnight Thursday and got into an argument with the woman. As she tried to escort him out, Soto bashed her head into a metal door, cops said.




bust

Doctor treating COVID-19 patients gambles on clot-busting drug

Doctors caring for the sickest COVID-19 patients are trying new ways to attack the coronavirus. One theory is that these patients have blood clots in their lungs.




bust

Charles Yu quarantines with disaster blockbusters, Wong Kar-wai and 'Ozark'

The author, most recently of "Interior Chinatown," opts for "Independence Day," a slew of inspiring novels, "Thor: Ragnarok" and "Ozark."




bust

Letters: Trump keeps campaign promises by building a robust economy

Keeping him in office prevents the left from destroying America with their socialistic ideology, a letter to the editor says.

      




bust

Letters: Robust health care system needed to combat coronavirus threat

Until we have a vaccine, the road to opening is through a health care system which can handle the infection, a letter to the editor says.

       




bust

Oil Crash Busted Broker's Computers and Inflicted Big Losses

An anonymous reader quotes a report from Bloomberg: Syed Shah usually buys and sells stocks and currencies through his Interactive Brokers account, but he couldn't resist trying his hand at some oil trading on April 20, the day prices plunged below zero for the first time ever. The day trader, working from his house in a Toronto suburb, figured he couldn't lose as he spent $2,400 snapping up crude at $3.30 a barrel, and then 50 cents. Then came what looked like the deal of a lifetime: buying 212 futures contracts on West Texas Intermediate for an astonishing penny each.

What he didn't know was oil's first trip into negative pricing had broken Interactive Brokers Group Inc. Its software couldn't cope with that pesky minus sign, even though it was always technically possible -- though this was an outlandish idea before the pandemic -- for the crude market to go upside down. Crude was actually around negative $3.70 a barrel when Shah's screen had it at 1 cent. Interactive Brokers never displayed a subzero price to him as oil kept diving to end the day at minus $37.63 a barrel. At midnight, Shah got the devastating news: he owed Interactive Brokers $9 million. He'd started the day with $77,000 in his account.

To be clear, investors who were long those oil contracts had a brutal day, regardless of what brokerage they had their account in. What set Interactive Brokers apart, though, is that its customers were flying blind, unable to see that prices had turned negative, or in other cases locked into their investments and blocked from trading. Compounding the problem, and a big reason why Shah lost an unbelievable amount in a few hours, is that the negative numbers also blew up the model Interactive Brokers used to calculate the amount of margin -- aka collateral -- that customers needed to secure their accounts.

"It's a $113 million mistake on our part," said Thomas Peterffy, the chairman and founder of Interactive Brokers, in an interview Wednesday. Customers will be made whole, Peterffy said. "We will rebate from our own funds to our customers who were locked in with a long position during the time the price was negative any losses they suffered below zero."





bust

Dave & Buster's Greenwood store has an opening date

Dave & Buster's is bringing its restaurant and entertainment complex to Greenwood in April, and the company plans to hire for more than 230 positions.

      




bust

5 ways Kesha and Macklemore crafted a summer blockbuster at Ruoff

A show billed as "The Adventures of Kesha and Macklemore" resembles a summer popcorn movie at Ruoff Home Mortgage Music Center.

       








bust

A blockbuster Facebook office deal is a make-or-break moment for the future of commercial real estate. 3 leasing experts lay out the stakes.

  • Facebook has been in negotiations for months to lease over 700,000 square feet at the Farley Building on Manhattan's West Side. 
  • Office leasing activity in the city has plummeted, giving the blockbuster deal even more importance as a sign of life in a suddenly lethargic market.
  • The coronavirus has spurred a deep downturn in the economy that is already being felt in the city's commercial real-estate market, prompting a big slowdown in leasing activity.  
  • The rapid expansion of tech in recent years has propelled the city's office market. Real estate execs say that Facebook's big deal is a key barometer. 
  • The crisis also raises questions about whether tenants will ever occupy office space the same way, as companies and their workforces around the world grow familiar with remote work.

Leasing activity in New York City's multi-billion-dollar commercial office market has dropped precipitously as the coronavirus has battered the market and raised questions of when — and even if — tenants can return to the workplace in a post-Covid world.

Amid the growing concerns the crisis will smother what had been robust demand for office space, eyes in the city's real estate industry have turned to a pending blockbuster deal on the West Side that could offer a signal of confidence to the market.

Facebook is in talks to take over 700,000 square feet of space in the Farley Building, a block-long property across Eighth Avenue from Penn Station.

"If that deal happens, then this market will be just fine," said Peter Riguardi, the New York area chairman and president of JLL. "If the deal happens but it's renegotiated, it will be fine, but it will be a trend that every tenant can follow. And if it doesn't happen, I would be very concerned about the market."


Facebook's NYC real-estate footprint

Last year, Facebook signed on for 1.5 million square feet in the Hudson Yards mega-development just west of the Farley Building, taking space in three new office towers at the project.

For months the $600 billion Silicon Valley-based social media giant has been in negotiations for even more space at the nearby Farley Building, whose interior landlord Vornado Realty Trust is redeveloping to include newly built office and retail space.

Vornado had originally expected to complete the deal with Facebook in early March, according to a source familiar with the negotiations. The talks have continued as the pandemic has brought commerce and social life to a virtual halt. The source expected the lease, which will commit Facebook to hundreds of millions of dollars in rent over its term, to be completed soon.

In a conference call with investors and analysts on Tuesday to discuss Vornado's first-quarter earnings, the company's CEO Steve Roth also hinted that the Facebook deal was still on track.

"There's another large tenant that has been rumored to be that we've been in dialogue with," Roth said, not directly naming the company. "That conversation is going forward aggressively and hopefully maybe even almost complete."

Rapid growth in Big Tech leasing before coronavirus

Recent real-estate decisions by Facebook and other tech companies have stoked concern among real-estate executives that such firms may reconsider their footprints after years of dramatic growth. Facebook on Thursday revealed that the bulk of its over 40,000-person workforce will be asked to work remotely for the remainder of the year, a timeline that suggests the company is taking a cautious approach to returning to its offices.


Real-estate executives have expressed concern that tenants may become accustomed to offloading a portion or even the bulk of their workforce to a remote-working model, leading them to drastically reduce their office commitments.

At a minimum, the economic upheaval has appeared to spur a newfound sense of caution in tech companies that have grown rapidly in recent years. Alphabet called off negotiations to expand its San Francisco offices by over 2 million square feet in recent weeks, according to a report from The Information.  

Tech has been a big driver of demand for office space

In recent years the tech industry had become one of the most voracious takers of space in the city, helping to push up commercial rents and spur the construction of new office space.

In 2019, tech firms accounted for 24.5% of the 31.6 million square feet of leasing activity in Manhattan, eclipsing the financial industry as the city's biggest space-taking sector for the first time, according to data from the real estate services and brokerage firm CBRE.

In 2010 tech leasing comprised just 4% of the 24.2 million square feet that was leased in the Manhattan market that year, CBRE said.

"Nothing has buoyed the confidence of landlords more in recent years than tech tenants," said Sacha Zarba, a leasing executive at CBRE who specializes in working with tech firms. "It didn't matter where your building was. If it was attractive to tech, you would stand a good chance to lease your space. If that industry retrenches a bit, it removes a big driver of demand."

The Manhattan office market has slowed rapidly in recent weeks as the virus crisis has battered the economy and shut down daily life.

About 844,000 square feet of space was leased in Manhattan in April, according to CBRE, 64% lower than the five-year monthly average. In the first four months of the year, nearly seven million square feet was leased, a decline of 30% from the same period a year ago.

So far, however, there are signs that tech continues to snap up space.

After scuttling plans to develop a 25,000-person second headquarters in Long Island City last year, Amazon purchased 424 Fifth Avenue, the former Lord & Taylor flagship department store, for nearly $1 billion in March. That property totals about 660,000 square feet. Late last year, before the pandemic had hit U.S. shores but after it had flared in China, Amazon also leased 335,000 square feet at 410 Tenth Avenue.

The commitments of major tech companies absorb millions of square feet in the city, but they also help fuel a larger ecosystem of tenants that occupies an even larger footprint. That means that a decrease in the real-estate footprint of just a few big tech players could be multiplied across the market as smaller players in the sector follow suit.

"Those big tech firms do a fantastic job of training and credentialing tech talent on the city," said Matt Harrigan, a co-founder of Company, a space incubator at 335 Madison Avenue that provides offices and community for both startups and more established tech firms. "Google and Facebook spin off talent who start or join other tech ventures that take space. That's what's so important about having the large presence of those companies here."





bust

The White House touts Trump’s deregulation. It’s actually been a bust.

Many of the changes are simply worse for the economy.




bust

17 Indicted in Bust of $32 Million Online Gambling Ring

The online gambling ring allegedly used an offshore website to help book $32 million in illegal sports wagers placed by more than 2,000 bettors in the United States.




bust

Busted! Late-Night Hack Comedian, Jimmy Kimmel Is Forced To Apologize For Sharing Highly Edited Video Of VP Pence To Make Him Look Bad


Last night, Jimmy Kimmel, host of the low-rated, late-night Jimmy Kimmel Show, shared a deceptively edited video clip of Vice President Pence delivering PPE to a nursing home. Today, liberal activist Matt McDermott tweeted the videotaped segment on VP Pence that was edited to make the vice president look like he was faking a delivery […]





bust

The Russian challenge demands a more robust Western strategy

4 June 2015


It is now clear that President Putin’s ‘new model Russia’ cannot be constructively accommodated into the international system. The war in Ukraine, in part the result of the West's laissez-faire approach to Russia, demonstrates the need for a new Western strategy towards Russia.

The Russian Challenge - a major new report by six authors from the Russia and Eurasia Programme at Chatham House - argues that a new strategy must recognise that:                  

  • The decline of the Russian economy, the costs of confrontation and the rise of China mean that the Putin regime is now facing the most serious challenge of its 15 years in power.  The West has neither the wish nor the means to promote regime change in Russia. But Western countries need to consider the possible consequences of a chaotic end to the Putin system.             
  • A critical element in the new geo-economic competition between the West and Russia is the extent of Western support for Ukraine, whose reconstruction as an effective sovereign state, capable of standing up for itself, is crucial. This will require much greater resources than have been invested up until now.                  
  • Russia has rapidly developed its armed forces and information warfare capabilities since the war in Georgia in 2008. The West must invest in defensive strategic communications and media support to counter the Kremlin’s false narratives and restore its conventional deterrent capabilities as a matter of urgency. In particular, NATO needs to demonstrate that the response to ‘ambiguous’ or ‘hybrid’ war will be robust.                  
  • Sanctions are exerting economic pressure on the Russian leadership and should remain in place until Ukraine’s territorial integrity is properly restored. In particular, it is self-defeating to link the lifting of sanctions solely to implementation of the poorly crafted and inherently fragile Minsk accords.                  
  • While deterrence and constraint are essential in the short term, the West must also prepare for an eventual change of leadership in Russia. There is a reasonable chance that current pressures will incline a future Russian leadership to want to re-engage with the West.

James Nixey, Head of the Russia and Eurasia Programme at Chatham House, said:  

'Pursuing these goals and achieving these objectives will ensure that the West is better prepared for any further deterioration in relations with Russia. The events of the last 18 months have demonstrated conclusively that when dealing with Russia, optimism is not a strategy.'

Editor's notes

Read the report The Russian Challenge from the Russia and Eurasia Programme, Chatham House.


This report will be launched at an event at Chatham House on Friday 5 June.





bust

Robust summarization and inference in proteome-wide label-free quantification

Adriaan Sticker
Apr 22, 2020; 0:RA119.001624v1-mcp.RA119.001624
Research




bust

Mass Spectrometry Based Immunopeptidomics Leads to Robust Predictions of Phosphorylated HLA Class I Ligands [Technological Innovation and Resources]

The presentation of peptides on class I human leukocyte antigen (HLA-I) molecules plays a central role in immune recognition of infected or malignant cells. In cancer, non-self HLA-I ligands can arise from many different alterations, including non-synonymous mutations, gene fusion, cancer-specific alternative mRNA splicing or aberrant post-translational modifications. Identifying HLA-I ligands remains a challenging task that requires either heavy experimental work for in vivo identification or optimized bioinformatics tools for accurate predictions. To date, no HLA-I ligand predictor includes post-translational modifications. To fill this gap, we curated phosphorylated HLA-I ligands from several immunopeptidomics studies (including six newly measured samples) covering 72 HLA-I alleles and retrieved a total of 2,066 unique phosphorylated peptides. We then expanded our motif deconvolution tool to identify precise binding motifs of phosphorylated HLA-I ligands. Our results reveal a clear enrichment of phosphorylated peptides among HLA-C ligands and demonstrate a prevalent role of both HLA-I motifs and kinase motifs on the presentation of phosphorylated peptides. These data further enabled us to develop and validate the first predictor of interactions between HLA-I molecules and phosphorylated peptides.




bust

Robust summarization and inference in proteome-wide label-free quantification [Research]

Label-Free Quantitative mass spectrometry based workflows for differential expression (DE) analysis of proteins impose important challenges on the data analysis due to peptide-specific effects and context dependent missingness of peptide intensities. Peptide-based workflows, like MSqRob, test for DE directly from peptide intensities and outperform summarization methods which first aggregate MS1 peptide intensities to protein intensities before DE analysis. However, these methods are computationally expensive, often hard to understand for the non-specialised end-user, and do not provide protein summaries, which are important for visualisation or downstream processing. In this work, we therefore evaluate state-of-the-art summarization strategies using a benchmark spike-in dataset and discuss why and when these fail compared to the state-of-the-art peptide based model, MSqRob. Based on this evaluation, we propose a novel summarization strategy, MSqRobSum, which estimates MSqRob’s model parameters in a two-stage procedure circumventing the drawbacks of peptide-based workflows. MSqRobSum maintains MSqRob’s superior performance, while providing useful protein expression summaries for plotting and downstream analysis. Summarising peptide to protein intensities considerably reduces the computational complexity, the memory footprint and the model complexity, and makes it easier to disseminate DE inferred on protein summaries. Moreover, MSqRobSum provides a highly modular analysis framework, which provides researchers with full flexibility to develop data analysis workflows tailored towards their specific applications.
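
The two-stage idea of summarizing peptide intensities to protein-level values before downstream analysis can be illustrated with a small, generic sketch. The snippet below is not MSqRobSum itself: it uses Tukey's median polish, one common robust summarization strategy for log2 peptide intensities, and all function names and the toy matrix are made up.

```python
# Illustrative sketch only: robust peptide-to-protein summarization via Tukey's
# median polish. This is NOT the MSqRobSum implementation; names and data are made up.
import numpy as np

def median_polish(x, n_iter=10, tol=1e-6):
    """Median polish of a peptides-by-samples matrix of log2 intensities.
    Returns (overall, row_effects, col_effects, residuals); the protein-level
    summary for each sample is overall + col_effects."""
    r = np.asarray(x, dtype=float).copy()
    overall = 0.0
    row = np.zeros(r.shape[0])
    col = np.zeros(r.shape[1])
    for _ in range(n_iter):
        row_med = np.nanmedian(r, axis=1)        # sweep rows
        r -= row_med[:, None]
        row += row_med
        shift = np.nanmedian(row)
        row -= shift
        overall += shift
        col_med = np.nanmedian(r, axis=0)        # sweep columns
        r -= col_med[None, :]
        col += col_med
        shift = np.nanmedian(col)
        col -= shift
        overall += shift
        if max(np.abs(row_med).max(), np.abs(col_med).max()) < tol:
            break
    return overall, row, col, r

# Toy protein: 4 peptides measured in 3 samples (log2 scale), one outlying intensity.
peptides = np.array([[20.1, 20.0, 21.2],
                     [18.3, 18.2, 19.5],
                     [22.0, 21.8, 23.1],
                     [19.0, 14.0, 20.2]])        # 14.0 is the outlier
overall, row_eff, col_eff, resid = median_polish(peptides)
protein_summary = overall + col_eff              # one summarized value per sample
print(np.round(protein_summary, 2))
```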




bust

Isolation of INS-1-derived cell lines with robust ATP-sensitive K+ channel-dependent and -independent glucose-stimulated insulin secretion

HE Hohmeier
Mar 1, 2000; 49:424-430
Articles




bust

Robustness and Locke's Wingless Gentleman

Our ancestors have made decisions under uncertainty ever since they had to stand and fight or run away, eat this root or that berry, sleep in this cave or under that bush. Our species is distinguished by the extent of deliberate thought preceding decision. Nonetheless, the ability to decide in the face of the unknown was born from primal necessity. Betting is one of the oldest ways of deciding under uncertainty. But you bet you that 'bet' is a subtler concept than one might think.

We all know what it means to make a bet, but just to make sure let's quote the Oxford English Dictionary: "To stake or wager (a sum of money, etc.) in support of an affirmation or on the issue of a forecast." The word has been around for quite a while. Shakespeare used the verb in 1600: "Iohn a Gaunt loued him well, and betted much money on his head." (Henry IV, Pt. 2 iii. ii. 44). Drayton used the noun in 1627 (and he wasn't the first): "For a long while it was an euen bet ... Whether proud Warwick, or the Queene should win."

An even bet is a 50-50 chance, an equal probability of each outcome. But betting is not always a matter of chance. Sometimes the meaning is just the opposite. According to the OED 'You bet' or 'You bet you' are slang expressions meaning 'be assured, certainly'. For instance: "'Can you handle this outfit?' 'You bet,' said the scout." (D.L.Sayers, Lord Peter Views Body, iv. 68). Mark Twain wrote "'I'll get you there on time' - and you bet you he did, too." (Roughing It, xx. 152).

So 'bet' is one of those words whose meaning stretches from one idea all the way to its opposite. Drayton's "even bet" between Warwick and the Queen means that he has no idea who will win. In contrast, Twain's "you bet you" is a statement of certainty. In Twain's or Sayers' usage, it's as though uncertainty combines with moral conviction to produce a definite resolution. This is a dialectic in which doubt and determination form decisiveness.

John Locke may have had something like this in mind when he wrote:

"If we will disbelieve everything, because we cannot certainly know all things; we shall do muchwhat as wisely as he, who would not use his legs, but sit still and perish, because he had no wings to fly." (An Essay Concerning Human Understanding, 1706, I.i.5)

The absurdity of Locke's wingless gentleman starving in his chair leads us to believe, and to act, despite our doubts. The moral imperative of survival sweeps aside the paralysis of uncertainty. The consequence of unabated doubt - paralysis - induces doubt's opposite: decisiveness.

But rational creatures must have some method for reasoning around their uncertainties. Locke does not intend for us to simply ignore our ignorance. But if we have no way to place bets - if the odds simply are unknown - then what are we to do? We cannot "sit still and perish".

This is where the strategy of robustness comes in.

'Robust' means 'Strong and hardy; sturdy; healthy'. By implication, something that is robust is 'not easily damaged or broken, resilient'. A statistical test is robust if it yields 'approximately correct results despite the falsity of certain of the assumptions underlying it' or despite errors in the data. (OED)

A decision is robust if its outcome is satisfactory despite error in the information and understanding which justified or motivated the decision. A robust decision is resilient to surprise, immune to ignorance.

It is no coincidence that the colloquial use of the word 'bet' includes concepts of both chance and certainty. A good bet can tolerate large deviation from certainty, large error of information. A good bet is robust to surprise. 'You bet you' does not mean that the world is certain. It means that the outcome is certain to be acceptable, regardless of how the world turns out. The scout will handle the outfit even if there is a rogue in the ranks; Twain will get there on time despite snags and surprises. A good bet is robust to the unknown. You bet you!


An extended and more formal discussion of these issues can be found elsewhere.




bust

Squirrels and Stock Brokers, Or: Innovation Dilemmas, Robustness and Probability

Decisions are made in order to achieve desirable outcomes. An innovation dilemma arises when a seemingly more attractive option is also more uncertain than other options. In this essay we explore the relation between the innovation dilemma and the robustness of a decision, and the relation between robustness and probability. A decision is robust to uncertainty if it achieves required outcomes despite adverse surprises. A robust decision may differ from the seemingly best option. Furthermore, robust decisions are not based on knowledge of probabilities, but can still be the most likely to succeed.

Squirrels, Stock-Brokers and Their Dilemmas




Decision problems.
Imagine a squirrel nibbling acorns under an oak tree. They're pretty good acorns, though a bit dry. The good ones have already been taken. Over in the distance is a large stand of fine oaks. The acorns there are probably better. But then, other squirrels can also see those trees, and predators can too. The squirrel doesn't need to get fat, but a critical caloric intake is necessary before moving on to other activities. How long should the squirrel forage at this patch before moving to the more promising patch, if at all?

Imagine a hedge fund manager investing in South African diamonds, Australian Uranium, Norwegian Kroners and Singapore semi-conductors. The returns have been steady and good, but not very exciting. A new hi-tech start-up venture has just turned up. It looks promising, has solid backing, and could be very interesting. The manager doesn't need to earn boundless returns, but it is necessary to earn at least a tad more than the competition (who are also prowling around). How long should the manager hold the current portfolio before changing at least some of its components?

These are decision problems, and like many other examples, they share three traits: critical needs must be met; the current situation may or may not be adequate; other alternatives look much better but are much more uncertain. To change, or not to change? What strategy to use in making a decision? What choice is the best bet? Betting is a surprising concept, as we have seen before; can we bet without knowing probabilities?

Solution strategies.
The decision is easy in either of two extreme situations, and their analysis will reveal general conclusions.

One extreme is that the status quo is clearly insufficient. For the squirrel this means that these crinkled rotten acorns won't fill anybody's belly even if one nibbled here all day long. Survival requires trying the other patch regardless of the fact that there may be many other squirrels already there and predators just waiting to swoop down. Similarly, for the hedge fund manager, if other funds are making fantastic profits, then something has to change or the competition will attract all the business.

The other extreme is that the status quo is just fine, thank you. For the squirrel, just a little more nibbling and these acorns will get us through the night, so why run over to unfamiliar oak trees? For the hedge fund manager, profits are better than those of any credible competitor, so uncertain change is not called for.

From these two extremes we draw an important general conclusion: the right answer depends on what you need. To change, or not to change, depends on what is critical for survival. There is no universal answer, like, "Always try to improve" or "If it's working, don't fix it". This is a very general property of decisions under uncertainty, and we will call it preference reversal. The agent's preference between alternatives depends on what the agent needs in order to "survive".

The decision strategy that we have described is attuned to the needs of the agent. The strategy attempts to satisfy the agent's critical requirements. If the status quo would reliably do that, then stay put; if not, then move. Following the work of Nobel Laureate Herbert Simon, we will call this a satisficing decision strategy: one which satisfies a critical requirement.

"Prediction is always difficult, especially of the future." - Robert Storm Petersen

Now let's consider a different decision strategy that squirrels and hedge fund managers might be tempted to use. The agent has obtained information about the two alternatives by signals from the environment. (The squirrel sees grand verdant oaks in the distance, the fund manager hears of a new start up.) Given this information, a prediction can be made (though the squirrel may make this prediction based on instincts and without being aware of making it). Given the best available information, the agent predicts which alternative would yield the better outcome. Using this prediction, the decision strategy is to choose the alternative whose predicted outcome is best. We will call this decision strategy best-model optimization. Note that this decision strategy yields a single universal answer to the question facing the agent. This strategy uses the best information to find the choice that - if that information is correct - will yield the best outcome. Best-model optimization (usually) gives a single "best" decision, unlike the satisficing strategy that returns different answers depending on the agent's needs.

There is an attractive logic - and even perhaps a moral imperative - to use the best information to make the best choice. One should always try to do one's best. But the catch in the argument for best-model optimization is that the best information may actually be grievously wrong. Those fine oak trees might be swarming with insects who've devoured the acorns. Best-model optimization ignores the agent's central dilemma: stay with the relatively well known but modest alternative, or go for the more promising but more uncertain alternative.

"Tsk, tsk, tsk" says our hedge fund manager. "My information already accounts for the uncertainty. I have used a probabilistic asset pricing model to predict the likelihood that my profits will beat the competition for each of the two alternatives."

Probabilistic asset pricing models are good to have. And the squirrel similarly has evolved instincts that reflect likelihoods. But a best-probabilistic-model optimization is simply one type of best-model optimization, and is subject to the same vulnerability to error. The world is full of surprises. The probability functions that are used are quite likely wrong, especially in predicting the rare events that the manager is most concerned to avoid.

Robustness and Probability

Now we come to the truly amazing part of the story. The satisficing strategy does not use any probabilistic information. Nonetheless, in many situations, the satisficing strategy is actually a better bet (or at least not a worse bet), probabilistically speaking, than any other strategy, including best-probabilistic-model optimization. We have no probabilistic information in these situations, but we can still maximize the probability of success (though we won't know the value of this maximum).

When the satisficing decision strategy is the best bet, this is, in part, because it is more robust to uncertainty than any other strategy. A decision is robust to uncertainty if it achieves required outcomes even if adverse surprises occur. In many important situations (though not invariably), more robustness to uncertainty is equivalent to being more likely to succeed or survive. When this is true we say that robustness is a proxy for probability.

A thorough analysis of the proxy property is rather technical. However, we can understand the gist of the idea by considering a simple special case.

Let's continue with the squirrel and hedge fund examples. Suppose we are completely confident about the future value (in calories or dollars) of not making any change (staying put). In contrast, the future value of moving is apparently better though uncertain. If staying put would satisfy our critical requirement, then we are absolutely certain of survival if we do not change. Staying put is completely robust to surprises so the probability of success equals 1 if we stay put, regardless of what happens with the other option. Likewise, if staying put would not satisfy our critical requirement, then we are absolutely certain of failure if we do not change; the probability of success equals 0 if we stay, and moving cannot be worse. Regardless of what probability distribution describes future outcomes if we move, we can always choose the option whose likelihood of success is greater (or at least not worse). This is because staying put is either sure to succeed or sure to fail, and we know which.

This argument can be extended to the more realistic case where the outcome of staying put is uncertain and the outcome of moving, while seemingly better than staying, is much more uncertain. The agent can know which option is more robust to uncertainty, without having to know probability distributions. This implies, in many situations, that the agent can choose the option that is a better bet for survival.
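
A toy Monte Carlo version of the special case above makes the point concrete (all numbers are made up, not from the essay): the value of staying is known exactly, the value of moving is predicted to be higher but the prediction can err, and "survival" means meeting the critical requirement. The satisficing rule uses no probability model, yet here its survival frequency is at least that of the predicted-best rule.

```python
# Toy illustration of the special case: a known "stay" payoff versus a seemingly
# better but uncertain "move" payoff whose assumed distribution is wrong.
import numpy as np

rng = np.random.default_rng(0)
REQUIREMENT = 10.0            # critical calories / critical return
STAY_VALUE = 11.0             # known with certainty
PREDICTED_MOVE_VALUE = 14.0   # what the best available model claims

def true_move_outcomes(n):
    # The real world deviates from the model: a heavier left tail than predicted.
    return PREDICTED_MOVE_VALUE + rng.normal(0, 1, n) - rng.exponential(4.0, n)

n = 100_000
move = true_move_outcomes(n)

# Best-model optimization: always pick the option with the higher predicted value,
# which here means always moving.
best_model_survival = np.mean(move >= REQUIREMENT)

# Satisficing: keep the option that certainly meets the requirement if there is one.
satisficing_survival = 1.0 if STAY_VALUE >= REQUIREMENT else np.mean(move >= REQUIREMENT)

print(f"best-model optimizer survival frequency: {best_model_survival:.3f}")
print(f"satisficing decision survival frequency: {satisficing_survival:.3f}")
```

If the known stay value fell below the requirement, both rules would move and their survival frequencies would coincide, which is the other half of the argument above.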

Wrapping Up

The skillful decision maker not only knows a lot, but is also able to deal with conflicting information. We have discussed the innovation dilemma: When choosing between two alternatives, the seemingly better one is also more uncertain.

Animals, people, organizations and societies have developed mechanisms for dealing with the innovation dilemma. The response hinges on tuning the decision to the agent's needs, and robustifying the choice against uncertainty. This choice may or may not coincide with the putative best choice. But what seems best depends on the available - though uncertain - information.

The commendable tendency to do one's best - and to demand the same of others - can lead to putatively optimal decisions that may be more vulnerable to surprise than other decisions that would have been satisfactory. In contrast, the strategy of robustly satisfying critical needs can be a better bet for survival. Consider the design of critical infrastructure: flood protection, nuclear power, communication networks, and so on. The design of such systems is based on vast knowledge and understanding, but also confronts bewildering uncertainties and endless surprises. We must continue to improve our knowledge and understanding, while also improving our ability to manage the uncertainties resulting from the expanding horizon of our efforts. We must identify the critical goals and seek responses that are immune to surprise. 




bust

A woman personifying friendship weeps before the bust of Giovanni Volpato. Engraving by P. Fontana, ca. 1807, after A. Canova.

[Rome?] : [publisher not identified], [1807?]




bust

The dynastic marriage of William of Orange and Mary Stuart: above, they are brought together before a bust of Hercules; below, their wedding in London on 4 November 1677. Etching by R. de Hooghe, 1678.

[The Netherlands] : [Romeyn de Hooghe?], [1678?]




bust

Path-Based Spectral Clustering: Guarantees, Robustness to Outliers, and Fast Algorithms

We consider the problem of clustering with the longest-leg path distance (LLPD) metric, which is informative for elongated and irregularly shaped clusters. We prove finite-sample guarantees on the performance of clustering with respect to this metric when random samples are drawn from multiple intrinsically low-dimensional clusters in high-dimensional space, in the presence of a large number of high-dimensional outliers. By combining these results with spectral clustering with respect to LLPD, we provide conditions under which the Laplacian eigengap statistic correctly determines the number of clusters for a large class of data sets, and prove guarantees on the labeling accuracy of the proposed algorithm. Our methods are quite general and provide performance guarantees for spectral clustering with any ultrametric. We also introduce an efficient, easy to implement approximation algorithm for the LLPD based on a multiscale analysis of adjacency graphs, which allows for the runtime of LLPD spectral clustering to be quasilinear in the number of data points.
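
As a rough illustration of the metric itself (not the paper's multiscale approximation algorithm), the longest-leg path distance over a connected graph can be read off a minimum spanning tree, since the minimax path between two points runs along the tree. The sketch below, with made-up data, computes exact pairwise LLPD this way for a small point set.

```python
# Exact LLPD for a small data set via a minimum spanning tree: the longest leg
# of the minimax path between two points equals the largest edge on their MST path.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def llpd_matrix(points):
    d = squareform(pdist(points))                 # complete Euclidean graph
    mst = minimum_spanning_tree(d).toarray()
    mst = np.maximum(mst, mst.T)                  # symmetrize the tree
    n = len(points)
    llpd = np.zeros((n, n))
    for src in range(n):
        best = np.full(n, -1.0)                   # largest edge seen on the way
        best[src] = 0.0
        stack = [src]
        while stack:                              # traverse the tree from src
            u = stack.pop()
            for v in np.nonzero(mst[u])[0]:
                if best[v] < 0:
                    best[v] = max(best[u], mst[u, v])
                    stack.append(v)
        llpd[src] = best
    return llpd

# Two elongated clusters: LLPD stays small within a cluster and jumps across them.
rng = np.random.default_rng(1)
a = np.c_[np.linspace(0, 5, 30), rng.normal(0, 0.05, 30)]
b = np.c_[np.linspace(0, 5, 30), 2 + rng.normal(0, 0.05, 30)]
D = llpd_matrix(np.vstack([a, b]))
print("max within-cluster LLPD :", round(D[:30, :30].max(), 2))
print("min between-cluster LLPD:", round(D[:30, 30:].min(), 2))
```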




bust

Provably robust estimation of modulo 1 samples of a smooth function with applications to phase unwrapping

Consider an unknown smooth function $f: [0,1]^d \rightarrow \mathbb{R}$, and assume we are given $n$ noisy mod 1 samples of $f$, i.e., $y_i = (f(x_i) + \eta_i) \bmod 1$, for $x_i \in [0,1]^d$, where $\eta_i$ denotes the noise. Given the samples $(x_i,y_i)_{i=1}^{n}$, our goal is to recover smooth, robust estimates of the clean samples $f(x_i) \bmod 1$. We formulate a natural approach for solving this problem, which works with angular embeddings of the noisy mod 1 samples over the unit circle, inspired by the angular synchronization framework. This amounts to solving a smoothness regularized least-squares problem -- a quadratically constrained quadratic program (QCQP) -- where the variables are constrained to lie on the unit circle. Our proposed approach is based on solving its relaxation, which is a trust-region sub-problem and hence solvable efficiently. We provide theoretical guarantees demonstrating its robustness to noise for adversarial, as well as random Gaussian and Bernoulli noise models. To the best of our knowledge, these are the first such theoretical results for this problem. We demonstrate the robustness and efficiency of our proposed approach via extensive numerical simulations on synthetic data, along with a simple least-squares based solution for the unwrapping stage, that recovers the original samples of $f$ (up to a global shift). It is shown to perform well at high levels of noise, when taking as input the denoised modulo $1$ samples. Finally, we also consider two other approaches for denoising the modulo 1 samples that leverage tools from Riemannian optimization on manifolds, including a Burer-Monteiro approach for a semidefinite programming relaxation of our formulation. For the two-dimensional version of the problem, which has applications in synthetic aperture radar interferometry (InSAR), we are able to solve instances of real-world data with a million sample points in under 10 seconds, on a personal laptop.
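
A heavily simplified one-dimensional sketch of the angular-embedding step is given below: map the noisy mod-1 samples to the unit circle, smooth the complex embedding with a plain smoothness-regularized least-squares solve (an unconstrained relaxation, not the paper's trust-region QCQP), and read denoised mod-1 values back from the angles. All parameter names and values are illustrative only.

```python
# Simplified 1-D sketch of the angular-embedding idea (unconstrained relaxation).
import numpy as np

rng = np.random.default_rng(0)
n, lam = 400, 10.0                       # lam: hypothetical smoothness weight
x = np.linspace(0, 1, n)
f = 2 * np.sin(2 * np.pi * x) + 3 * x    # smooth function whose range exceeds 1
y = (f + rng.normal(0, 0.08, n)) % 1.0   # noisy mod-1 samples

z = np.exp(2j * np.pi * y)               # angular embedding on the unit circle

# Smoothness-regularized least squares on the embedding: solve (I + lam*L) g = z,
# where L is the path-graph Laplacian penalizing differences between neighbours.
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1.0
    L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0
    L[i + 1, i] -= 1.0
g = np.linalg.solve(np.eye(n, dtype=complex) + lam * L, z)

denoised = (np.angle(g) / (2 * np.pi)) % 1.0     # back to mod-1 values
clean = f % 1.0
chord = lambda a, b: np.abs(np.exp(2j * np.pi * a) - np.exp(2j * np.pi * b))
print("mean circular error, noisy   :", round(chord(y, clean).mean(), 3))
print("mean circular error, denoised:", round(chord(denoised, clean).mean(), 3))
```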




bust

Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions

We consider the standard model of distributed optimization of a sum of functions $F(\mathbf z) = \sum_{i=1}^n f_i(\mathbf z)$, where node $i$ in a network holds the function $f_i(\mathbf z)$. We allow for a harsh network model characterized by asynchronous updates, message delays, unpredictable message losses, and directed communication among nodes. In this setting, we analyze a modification of the Gradient-Push method for distributed optimization, assuming that (i) node $i$ is capable of generating gradients of its function $f_i(\mathbf z)$ corrupted by zero-mean bounded-support additive noise at each step, (ii) $F(\mathbf z)$ is strongly convex, and (iii) each $f_i(\mathbf z)$ has Lipschitz gradients. We show that our proposed method asymptotically performs as well as the best bounds on centralized gradient descent that takes steps in the direction of the sum of the noisy gradients of all the functions $f_1(\mathbf z), \ldots, f_n(\mathbf z)$ at each step.
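
For orientation, here is a minimal synchronous sketch of the plain push-sum / Gradient-Push idea that the paper modifies, with a made-up directed ring and scalar quadratic objectives; the asynchrony, message delays, and losses that the paper actually handles are omitted.

```python
# Minimal synchronous push-sum / gradient-push sketch. Each node i holds a
# quadratic f_i(z) = (z - a_i)^2, so the minimizer of the sum F is mean(a).
import numpy as np

rng = np.random.default_rng(0)
n = 6
a = rng.normal(0, 5, n)                  # node-specific minimizers

# Column-stochastic mixing matrix for a directed ring: node i sends to itself and i+1.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    A[(i + 1) % n, i] = 0.5              # each column sums to 1

x = np.zeros(n)                          # push-sum numerators
w = np.ones(n)                           # push-sum weights
for t in range(1, 2001):
    z = x / w                            # de-biased local estimates
    grad = 2 * (z - a) + rng.uniform(-0.1, 0.1, n)  # noisy local gradients
    x = A @ (x - (1.0 / t) * grad)       # gradient step, then push to out-neighbours
    w = A @ w

print("local estimates after mixing:", np.round(x / w, 3))
print("true minimizer (mean of a)  :", round(a.mean(), 3))
```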




bust

Exact Guarantees on the Absence of Spurious Local Minima for Non-negative Rank-1 Robust Principal Component Analysis

This work is concerned with the non-negative rank-1 robust principal component analysis (RPCA), where the goal is to recover the dominant non-negative principal components of a data matrix precisely, where a number of measurements could be grossly corrupted with sparse and arbitrarily large noise. Most of the known techniques for solving the RPCA rely on convex relaxation methods by lifting the problem to a higher dimension, which significantly increases the number of variables. As an alternative, the well-known Burer-Monteiro approach can be used to cast the RPCA as a non-convex and non-smooth $\ell_1$ optimization problem with a significantly smaller number of variables. In this work, we show that the low-dimensional formulation of the symmetric and asymmetric positive rank-1 RPCA based on the Burer-Monteiro approach has benign landscape, i.e., 1) it does not have any spurious local solution, 2) has a unique global solution, and 3) its unique global solution coincides with the true components. An implication of this result is that simple local search algorithms are guaranteed to achieve a zero global optimality gap when directly applied to the low-dimensional formulation. Furthermore, we provide strong deterministic and probabilistic guarantees for the exact recovery of the true principal components. In particular, it is shown that a constant fraction of the measurements could be grossly corrupted and yet they would not create any spurious local solution.
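
A toy sketch of the kind of simple local search the landscape result speaks to is shown below: projected subgradient steps on the low-dimensional l1 objective g(u) = sum_ij |M_ij - u_i u_j| with u >= 0, for the symmetric case. The step sizes, corruption level, and all names are ad hoc illustrations, not the paper's analysis.

```python
# Toy local search on the low-dimensional l1 formulation (symmetric, non-negative case).
import numpy as np

rng = np.random.default_rng(0)
n = 40
u_true = rng.uniform(0.5, 2.0, n)              # planted non-negative component
M = np.outer(u_true, u_true)

# Symmetric sparse gross corruption of about 5% of the off-diagonal entries.
mask = rng.random((n, n)) < 0.05
mask = np.triu(mask, 1)
mask = mask | mask.T
noise = np.triu(rng.uniform(-50.0, 50.0, (n, n)), 1)
M_obs = M + mask * (noise + noise.T)

def subgradient(u):
    s = np.sign(np.outer(u, u) - M_obs)        # subgradient of sum_ij |u_i u_j - M_ij|
    return 2.0 * s @ u

u = rng.uniform(0.1, 1.0, n)                   # random non-negative start
for t in range(1, 5001):
    step = 0.5 / (n * np.sqrt(t))
    u = np.maximum(u - step * subgradient(u), 0.0)   # projected subgradient step

rel_err = np.linalg.norm(u - u_true) / np.linalg.norm(u_true)
print("relative error of the recovered component:", round(rel_err, 3))
```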




bust

A note on the “L-logistic regression models: Prior sensitivity analysis, robustness to outliers and applications”

Saralees Nadarajah, Yuancheng Si.

Source: Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 183--187.

Abstract:
Da Paz, Balakrishnan and Bazan [Braz. J. Probab. Stat. 33 (2019), 455–479] introduced the L-logistic distribution, studied its properties including estimation issues and illustrated a data application. This note derives a closed form expression for moment properties of the distribution. Some computational issues are discussed.




bust

Robust Bayesian model selection for heavy-tailed linear regression using finite mixtures

Flávio B. Gonçalves, Marcos O. Prates, Victor Hugo Lachos.

Source: Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 51--70.

Abstract:
In this paper, we present a novel methodology to perform Bayesian model selection in linear models with heavy-tailed distributions. We consider a finite mixture of distributions to model a latent variable where each component of the mixture corresponds to one possible model within the symmetrical class of normal independent distributions. Naturally, the Gaussian model is one of the possibilities. This allows for a simultaneous analysis based on the posterior probability of each model. Inference is performed via Markov chain Monte Carlo—a Gibbs sampler with Metropolis–Hastings steps for a class of parameters. Simulated examples highlight the advantages of this approach compared to a segregated analysis based on arbitrarily chosen model selection criteria. Examples with real data are presented and an extension to censored linear regression is introduced and discussed.




bust

L-Logistic regression models: Prior sensitivity analysis, robustness to outliers and applications

Rosineide F. da Paz, Narayanaswamy Balakrishnan, Jorge Luis Bazán.

Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 455--479.

Abstract:
Tadikamalla and Johnson [Biometrika 69 (1982) 461–465] developed the $L_{B}$ distribution for variables with bounded support by considering a transformation of the standard logistic distribution. In this manuscript, a convenient parametrization of this distribution is proposed in order to develop regression models. This distribution, referred to here as the L-Logistic distribution, provides great flexibility and includes the uniform distribution as a particular case. Several properties of this distribution are studied, and a Bayesian approach is adopted for parameter estimation. Simulation studies considering prior sensitivity analysis, recovery of parameters, comparison of algorithms, and robustness to outliers are discussed, showing that the results are insensitive to the choice of priors, that the MCMC algorithm adopted is efficient, and that the model is robust when compared with the beta distribution. Applications to estimating vulnerability to poverty and to explaining anxiety are presented. The results of the applications show that the L-Logistic regression models provide a better fit than the corresponding beta regression models.




bust

Failure rate of Birnbaum–Saunders distributions: Shape, change-point, estimation and robustness

Emilia Athayde, Assis Azevedo, Michelli Barros, Víctor Leiva.

Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 301--328.

Abstract:
The Birnbaum–Saunders (BS) distribution has been widely studied and applied. A random variable with a BS distribution is a transformation of another random variable with a standard normal distribution. Generalized BS distributions are obtained when the normally distributed random variable is replaced by another symmetrically distributed random variable. This allows us to obtain a wide class of positively skewed models with lighter and heavier tails than the BS model. Its failure rate admits several shapes, including the unimodal case, and its change-point can be used for different purposes, for example, to establish a reduction in dose, and hence in the cost of a medical treatment. We analyze the failure rates of generalized BS distributions obtained from the logistic, normal and Student-t distributions, considering their shape and change-point, estimating them, evaluating their robustness, assessing their performance by simulations, and applying the results to real data from different areas.
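
The normal-to-BS transformation mentioned above can be sketched directly: if Z is standard normal, then T = beta * ((alpha*Z + sqrt(alpha^2*Z^2 + 4)) / 2)^2 follows a BS(alpha, beta) law. The snippet below simulates such samples and inspects a crude empirical failure rate; the parameter values and grid are arbitrary illustrations.

```python
# Simulate BS(alpha, beta) samples from a standard normal and inspect the hazard.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 0.8, 1.0, 200_000
Z = rng.standard_normal(n)
T = beta * ((alpha * Z + np.sqrt(alpha**2 * Z**2 + 4.0)) / 2.0) ** 2

# Crude empirical failure rate h(t) ~ density / survival on a grid.
grid = np.linspace(0.05, 6.0, 120)
dt = grid[1] - grid[0]
counts, _ = np.histogram(T, bins=np.append(grid, grid[-1] + dt))
density = counts / (n * dt)
survival = np.array([(T > t).mean() for t in grid])
hazard = density / np.maximum(survival, 1e-9)
print("empirical failure rate peaks near t =", round(float(grid[hazard.argmax()]), 2))
```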




bust

Bayesian robustness to outliers in linear regression and ratio estimation

Alain Desgagné, Philippe Gagnon.

Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 205--221.

Abstract:
Whole robustness is a nice property to have for statistical models. It implies that the impact of outliers gradually vanishes as they approach plus or minus infinity. So far, the Bayesian literature provides results that ensure whole robustness for the location-scale model. In this paper, we make two contributions. First, we generalise the results to attain whole robustness in simple linear regression through the origin, which is a necessary step towards results for general linear regression models. We allow the variance of the error term to depend on the explanatory variable. This flexibility leads to the second contribution: we provide a simple Bayesian approach to robustly estimate finite population means and ratios. The strategy to attain whole robustness is simple since it lies in replacing the traditional normal assumption on the error term by a super heavy-tailed distribution assumption. As a result, users can estimate the parameters as usual, using the posterior distribution.




bust

A Distributionally Robust Area Under Curve Maximization Model. (arXiv:2002.07345v2 [math.OC] UPDATED)

Area under the ROC curve (AUC) is a widely used performance measure for classification models. We propose two new distributionally robust AUC maximization models (DR-AUC) that rely on the Kantorovich metric and approximate the AUC with the hinge loss function. We consider two cases, with fixed and variable support for the worst-case distribution, respectively. We use duality theory to reformulate the DR-AUC models and derive tractable convex optimization problems. The numerical experiments show that the proposed DR-AUC models -- benchmarked against the standard deterministic AUC and support vector machine models -- perform better in general and in particular improve the worst-case out-of-sample performance over the majority of the considered datasets, thereby showing their robustness. The results are particularly encouraging since our numerical experiments are conducted with training sets of small size, which is known to be conducive to low out-of-sample performance.
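
For context, the deterministic building block can be sketched on its own: approximate AUC maximization for a linear scorer by minimizing the pairwise hinge loss over positive-negative pairs. The distributionally robust reformulation via the Kantorovich metric is not implemented here, and the data and step size are made up.

```python
# Pairwise hinge surrogate for AUC maximization with a linear scorer:
#   minimize the mean over (positive p, negative q) of max(0, 1 - (w.x_p - w.x_q)).
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 5
X_pos = rng.normal(0.8, 1.0, (n, d))          # positive-class features
X_neg = rng.normal(0.0, 1.0, (n, d))          # negative-class features

w = np.zeros(d)
lr = 0.05
for _ in range(500):
    sp, sn = X_pos @ w, X_neg @ w
    active = (sp[:, None] - sn[None, :]) < 1.0       # pairs violating the margin
    grad = -(active.sum(axis=1) @ X_pos - active.sum(axis=0) @ X_neg) / active.size
    w -= lr * grad

sp, sn = X_pos @ w, X_neg @ w
auc = np.mean(sp[:, None] > sn[None, :])             # empirical AUC of the scorer
print("empirical AUC:", round(float(auc), 3))
```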




bust

Robust location estimators in regression models with covariates and responses missing at random. (arXiv:2005.03511v1 [stat.ME])

This paper deals with robust marginal estimation under a general regression model when missing data occur in the response and also in some of the covariates. The target is a marginal location parameter given through an $M$-functional. To obtain robust Fisher-consistent estimators, properly defined marginal distribution function estimators are considered. These estimators avoid the bias due to missing values under a missing at random condition. Three methods are considered to estimate the marginal distribution function, which in turn yields the $M$-location of interest: the well-known inverse probability weighting, a convolution-based method that makes use of the regression model, and an augmented inverse probability weighting procedure that protects against misspecification. The proposed robust estimators and the classical ones are compared through a numerical study under different missing-data models, including clean and contaminated samples. We illustrate the estimators' behaviour under a nonlinear model. A real data set is also analysed.
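
A bare-bones sketch of the first of the three routes, inverse probability weighting, with the median as the marginal M-location: estimate the probability of observing the response from the fully observed covariate, weight each observed response by the inverse of that probability, and take the weighted median. The convolution-based and augmented estimators and the robustness refinements are not shown; the simulated missingness model and all names are illustrative.

```python
# IPW estimate of a marginal location (median) under missing-at-random responses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 1.0, n)                    # always-observed covariate
y = 2.0 + x + rng.standard_t(df=3, size=n)     # response with heavy-tailed errors

# Missing at random: the chance of observing y depends on x only.
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
observed = rng.random(n) < p_obs

# Estimate the propensity from the observed/missing indicator, then weight.
propensity = LogisticRegression().fit(x[:, None], observed.astype(int))
pi_hat = propensity.predict_proba(x[:, None])[:, 1]
y_obs = y[observed]
w_obs = 1.0 / np.clip(pi_hat[observed], 1e-3, None)

# Weighted empirical distribution function and its median.
order = np.argsort(y_obs)
cum_w = np.cumsum(w_obs[order]) / w_obs.sum()
ipw_median = y_obs[order][np.searchsorted(cum_w, 0.5)]

print("naive median of observed y:", round(float(np.median(y_obs)), 3))
print("IPW-corrected median      :", round(float(ipw_median), 3))
print("median of all y (oracle)  :", round(float(np.median(y)), 3))
```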




bust

Distributional Robustness of K-class Estimators and the PULSE. (arXiv:2005.03353v1 [econ.EM])

In causal settings, such as instrumental variable settings, it is well known that estimators based on ordinary least squares (OLS) can yield biased and non-consistent estimates of the causal parameters. This is partially overcome by two-stage least squares (TSLS) estimators. These are, under weak assumptions, consistent but do not have desirable finite sample properties: in many models, for example, they do not have finite moments. The set of K-class estimators can be seen as a non-linear interpolation between OLS and TSLS and are known to have improved finite sample properties. Recently, in causal discovery, invariance properties such as the moment criterion which TSLS estimators leverage have been exploited for causal structure learning: e.g., in cases where the causal parameter is not identifiable, some structure of the non-zero components may still be identified, and coverage guarantees are available. Subsequently, anchor regression has been proposed to trade off invariance and predictability. The resulting estimator is shown to have optimal predictive performance under bounded shift interventions. In this paper, we show that the concepts of anchor regression and K-class estimators are closely related. Establishing this connection comes with two benefits: (1) it enables us to prove robustness properties for existing K-class estimators when considering distributional shifts; and (2) we propose a novel estimator in instrumental variable settings by minimizing the mean squared prediction error subject to the constraint that the estimator lies in an asymptotically valid confidence region of the causal parameter. We call this estimator PULSE (p-uncorrelated least squares estimator) and show that it can be computed efficiently, even though the underlying optimization problem is non-convex. We further prove that it is consistent.
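
A small numpy sketch of the classical K-class family on its own shows how kappa interpolates between OLS (kappa = 0) and TSLS (kappa = 1) on simulated confounded data; PULSE's data-driven choice of kappa via the confidence-region constraint is not implemented, and the simulated numbers are made up.

```python
# Classical K-class estimator: kappa = 0 is OLS, kappa = 1 is TSLS.
import numpy as np

def k_class(y, X, Z, kappa):
    """Solve X'(I - kappa*M_Z) X beta = X'(I - kappa*M_Z) y with M_Z = I - P_Z,
    written without forming any n x n matrices."""
    ZtX, Zty, ZtZ = Z.T @ X, Z.T @ y, Z.T @ Z
    XPX = ZtX.T @ np.linalg.solve(ZtZ, ZtX)        # X' P_Z X
    XPy = ZtX.T @ np.linalg.solve(ZtZ, Zty)        # X' P_Z y
    A = (1.0 - kappa) * (X.T @ X) + kappa * XPX
    b = (1.0 - kappa) * (X.T @ y) + kappa * XPy
    return np.linalg.solve(A, b)

# Toy instrumental-variable data: x is endogenous through the hidden confounder h.
rng = np.random.default_rng(0)
n = 20_000
z = rng.normal(0.0, 1.0, n)                        # instrument
h = rng.normal(0.0, 1.0, n)                        # hidden confounder
x = z + h + rng.normal(0.0, 1.0, n)
y = 2.0 * x + 3.0 * h + rng.normal(0.0, 1.0, n)    # true causal coefficient: 2

X, Z = x[:, None], z[:, None]
for kappa in (0.0, 0.5, 1.0):                      # OLS, an interpolation, TSLS
    print(f"kappa = {kappa:3.1f}   beta_hat = {k_class(y, X, Z, kappa)[0]:.3f}")
```

On this simulated data the kappa = 0 (OLS) estimate is pulled away from 2 by the confounder, while kappa = 1 (TSLS) lands near 2, with the intermediate value in between.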




bust

Towards Frequency-Based Explanation for Robust CNN. (arXiv:2005.03141v1 [cs.LG])

Current explanation techniques for transparent Convolutional Neural Networks (CNNs) mainly focus on building connections between human-understandable input features and the model's predictions, overlooking an alternative representation of the input: its decomposition into frequency components. In this work, we present an analysis of the connection between the distribution of frequency components in the input dataset and the reasoning process the model learns from the data. We further provide a quantitative analysis of the contribution of different frequency components to the model's predictions. We show that the model's vulnerability to tiny distortions is a result of its reliance on high-frequency features, the target features of adversarial (black- and white-box) attackers. We further show that if the model develops a stronger association between the low-frequency components and the true labels, the model is more robust, which explains why adversarially trained models are more robust to tiny distortions.
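
The kind of decomposition such an analysis rests on can be sketched with a radial FFT mask that splits an input into low- and high-frequency components; feeding each component to a trained model (not included here) and comparing predictions is one way to probe high-frequency reliance. The mask radius and toy image below are arbitrary.

```python
# Split a 2-D input into low- and high-frequency components with a radial FFT mask.
import numpy as np

def frequency_split(img, radius=0.15):
    """Return (low, high) components of a 2-D array using a circular mask of the
    given relative radius in the centred Fourier domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= radius * min(h, w)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(F * ~mask)))
    return low, high

# Toy image: a smooth gradient plus fine-grained texture.
rng = np.random.default_rng(0)
img = np.linspace(0, 1, 64)[None, :] * np.ones((64, 64)) + 0.1 * rng.normal(size=(64, 64))
low, high = frequency_split(img)
print("components sum back to the image:", bool(np.allclose(low + high, img)))
```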




bust

Combustion emissions

Schofield, Keith.
9780128191279 (electronic bk.)




bust

Robust sparse covariance estimation by thresholding Tyler’s M-estimator

John Goes, Gilad Lerman, Boaz Nadler.

Source: The Annals of Statistics, Volume 48, Number 1, 86--110.

Abstract:
Estimating a high-dimensional sparse covariance matrix from a limited number of samples is a fundamental task in contemporary data analysis. Most proposals to date, however, are not robust to outliers or heavy tails. Toward bridging this gap, in this work we consider estimating a sparse shape matrix from $n$ samples following a possibly heavy-tailed elliptical distribution. We propose estimators based on thresholding either Tyler’s M-estimator or its regularized variant. We prove that in the joint limit as the dimension $p$ and the sample size $n$ tend to infinity with $p/n \to \gamma > 0$, our estimators are minimax rate optimal. Results on simulated data support our theoretical analysis.
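
A compact sketch of the two ingredients named above: Tyler's M-estimator computed by its usual fixed-point iteration, followed by hard thresholding of the off-diagonal entries of the resulting shape matrix. The regularized variant and the theoretically calibrated threshold from the paper are not shown; the threshold and simulation settings below are ad hoc.

```python
# Tyler's M-estimator (fixed-point iteration) + hard thresholding of off-diagonals.
import numpy as np

def tyler_shape(X, n_iter=100):
    """X is n x p (rows are centred samples); returns a shape matrix with trace p."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        Q = np.linalg.solve(S, X.T)                  # S^{-1} x_i for every sample
        d = np.einsum("ji,ji->i", Q, X.T)            # x_i' S^{-1} x_i
        S = (p / n) * (X / d[:, None]).T @ X
        S *= p / np.trace(S)                         # fix the scale
    return S

def hard_threshold(S, thr):
    T = np.where(np.abs(S) >= thr, S, 0.0)
    np.fill_diagonal(T, np.diag(S))                  # keep the diagonal
    return T

# Heavy-tailed elliptical samples (multivariate t with 2 df) from a sparse shape matrix.
rng = np.random.default_rng(0)
p, n = 30, 150
Sigma = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), Sigma, n) / np.sqrt(rng.chisquare(2, n) / 2)[:, None]

S_hat = hard_threshold(tyler_shape(X), thr=np.sqrt(np.log(p) / n))
Sigma_shape = Sigma * p / np.trace(Sigma)            # true shape, normalized to trace p
rel_err = np.linalg.norm(S_hat - Sigma_shape) / np.linalg.norm(Sigma_shape)
print("relative Frobenius error:", round(float(rel_err), 3))
```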




bust

Robust elastic net estimators for variable selection and identification of proteomic biomarkers

Gabriela V. Cohen Freue, David Kepplinger, Matías Salibián-Barrera, Ezequiel Smucler.

Source: The Annals of Applied Statistics, Volume 13, Number 4, 2065--2090.

Abstract:
In large-scale quantitative proteomic studies, scientists measure the abundance of thousands of proteins from the human proteome in search of novel biomarkers for a given disease. Penalized regression estimators can be used to identify potential biomarkers among a large set of molecular features measured. Yet, the performance and statistical properties of these estimators depend on the loss and penalty functions used to define them. Motivated by a real plasma proteomic biomarkers study, we propose a new class of penalized robust estimators based on the elastic net penalty, which can be tuned to keep groups of correlated variables together in the selected model and maintain robustness against possible outliers. We also propose an efficient algorithm to compute our robust penalized estimators and derive a data-driven method to select the penalty term. Our robust penalized estimators have very good robustness properties and are also consistent under certain regularity conditions. Numerical results show that our robust estimators compare favorably to other robust penalized estimators. Using our proposed methodology for the analysis of the proteomics data, we identify new potentially relevant biomarkers of cardiac allograft vasculopathy that are not found with nonrobust alternatives. The selected model is validated in a new set of 52 test samples and achieves an area under the receiver operating characteristic (AUC) of 0.85.




bust

Robust regression via multivariate regression depth

Chao Gao.

Source: Bernoulli, Volume 26, Number 2, 1139--1170.

Abstract:
This paper studies robust regression in the settings of Huber’s $\epsilon$-contamination models. We consider estimators that are maximizers of multivariate regression depth functions. These estimators are shown to achieve minimax rates in the settings of $\epsilon$-contamination models for various regression problems including nonparametric regression, sparse linear regression, reduced rank regression, etc. We also discuss a general notion of depth function for linear operators that has potential applications in robust functional linear regression.




bust

Robust estimation of mixing measures in finite mixture models

Nhat Ho, XuanLong Nguyen, Ya’acov Ritov.

Source: Bernoulli, Volume 26, Number 2, 828--857.

Abstract:
In finite mixture models, apart from the underlying mixing measure, the true kernel density function of each subpopulation in the data is, in many scenarios, unknown. Perhaps the most popular approach is to choose some kernel functions that we empirically believe our data are generated from and use these kernels to fit our models. Nevertheless, as long as the chosen kernel and the true kernel differ, statistical inference of the mixing measure in this setting will be highly unstable. To overcome this challenge, we propose flexible and efficient robust estimators of the mixing measure in these models, inspired by the idea of the minimum Hellinger distance estimator, model selection criteria, and the superefficiency phenomenon. We demonstrate that our estimators consistently recover the true number of components and achieve the optimal convergence rates of parameter estimation under both the well- and misspecified kernel settings for any fixed bandwidth. These desirable asymptotic properties are illustrated via careful simulation studies with both synthetic and real data.
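
A toy sketch of the minimum-Hellinger-distance idea only (the authors' estimator additionally handles selection of the number of components): a two-component Gaussian location mixture is fit by minimizing the Hellinger distance to a kernel density estimate. All names and settings below are illustrative.

```python
# Minimal sketch: minimum-Hellinger-distance fit of a two-component
# Gaussian mixture against a kernel density estimate on a grid.
import numpy as np
from scipy.stats import gaussian_kde, norm
from scipy.optimize import minimize

def fit_two_component(x, grid_size=512):
    kde = gaussian_kde(x)
    grid = np.linspace(x.min() - 3, x.max() + 3, grid_size)
    f_hat = kde(grid)
    dx = grid[1] - grid[0]

    def hellinger_sq(params):
        w, mu1, mu2, log_s = params
        w = 1.0 / (1.0 + np.exp(-w))              # keep the weight in (0, 1)
        s = np.exp(log_s)
        g = w * norm.pdf(grid, mu1, s) + (1 - w) * norm.pdf(grid, mu2, s)
        return 1.0 - np.sum(np.sqrt(f_hat * g)) * dx   # squared Hellinger distance

    res = minimize(hellinger_sq,
                   x0=[0.0, np.quantile(x, 0.25), np.quantile(x, 0.75), 0.0],
                   method="Nelder-Mead")
    w, mu1, mu2, log_s = res.x
    return 1.0 / (1.0 + np.exp(-w)), mu1, mu2, np.exp(log_s)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 200)])
    print(fit_two_component(x))
```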




bust

Robust modifications of U-statistics and applications to covariance estimation problems

Stanislav Minsker, Xiaohan Wei.

Source: Bernoulli, Volume 26, Number 1, 694--727.

Abstract:
Let $Y$ be a $d$-dimensional random vector with unknown mean $\mu$ and covariance matrix $\Sigma$. This paper is motivated by the problem of designing an estimator of $\Sigma$ that admits exponential deviation bounds in the operator norm under minimal assumptions on the underlying distribution, such as existence of only 4th moments of the coordinates of $Y$. To address this problem, we propose robust modifications of the operator-valued U-statistics, obtain non-asymptotic guarantees for their performance, and demonstrate the implications of these results to the covariance estimation problem under various structural assumptions.
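
A much simpler estimator that is related in spirit (but is not the authors' construction): a median-of-means covariance estimate, obtained by splitting the sample into blocks, computing the sample covariance on each block, and taking the entrywise median across blocks.

```python
# Minimal sketch: median-of-means covariance estimation, robust to a
# small number of heavy-tailed or contaminated observations.
import numpy as np

def mom_covariance(X, n_blocks=10):
    """X: (n, d) data matrix. Returns a (d, d) robustified covariance estimate."""
    blocks = np.array_split(X, n_blocks)
    covs = [np.cov(block, rowvar=False) for block in blocks]
    return np.median(np.stack(covs), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.standard_normal((1000, 5))
    X[:20] *= 50.0                                # heavy-tailed contamination
    print(np.round(mom_covariance(X), 2))
```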




bust

Needles and straw in a haystack: Robust confidence for possibly sparse sequences

Eduard Belitser, Nurzhan Nurushev.

Source: Bernoulli, Volume 26, Number 1, 191--225.

Abstract:
In the general signal$+$noise (allowing non-normal, non-independent observations) model, we construct an empirical Bayes posterior which we then use for uncertainty quantification for the unknown, possibly sparse, signal. We introduce a novel excessive bias restriction (EBR) condition, which gives rise to a new slicing of the entire space that is suitable for uncertainty quantification. Under EBR and some mild exchangeable exponential moment condition on the noise, we establish the local (oracle) optimality of the proposed confidence ball. Without EBR, we propose another confidence ball of full coverage, but its radius contains an additional $\sigma n^{1/4}$-term. In passing, we also get the local optimal results for estimation, posterior contraction problems, and the problem of weak recovery of sparsity structure. Adaptive minimax results (also for the estimation and posterior contraction problems) over various sparsity classes follow from our local results.




bust

As Trump returns to the road, some Democrats want to bust Biden out of his basement

While President Donald Trump traveled to the battleground state of Arizona this week, his Democratic opponent for the White House, Joe Biden, campaigned from his basement as he has done throughout the coronavirus pandemic. The freeze on in-person campaigning during the outbreak has had an upside for Biden, giving the former vice president more time to court donors and shielding him from on-the-trail gaffes. "I personally would like to see him out more because he's in his element when he's meeting people," said Tom Sacks-Wilner, a fundraiser for Biden who is on the campaign's finance committee.





bust

A New Bayesian Approach to Robustness Against Outliers in Linear Regression

Philippe Gagnon, Alain Desgagné, Mylène Bédard.

Source: Bayesian Analysis, Volume 15, Number 2, 389--414.

Abstract:
Linear regression is ubiquitous in statistical analysis. It is well understood that conflicting sources of information may contaminate the inference when the classical normality of errors is assumed. The contamination caused by the light normal tails follows from an undesirable effect: the posterior concentrates in an area in between the different sources with a large enough scaling to incorporate them all. The theory of conflict resolution in Bayesian statistics (O’Hagan and Pericchi (2012)) recommends addressing this problem by limiting the impact of outliers to obtain conclusions consistent with the bulk of the data. In this paper, we propose a model with super heavy-tailed errors to achieve this. We prove that it is wholly robust, meaning that the impact of outliers gradually vanishes as they move further and further away from the general trend. The super heavy-tailed density is similar to the normal outside of the tails, which gives rise to an efficient estimation procedure. In addition, estimates are easily computed. This is highlighted via a detailed user guide, where all steps are explained through a simulated case study. The performance is shown using simulation. All required code is given.
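
A toy sketch of the general mechanism only, not the authors' model or code: here Student-t errors stand in for the super heavy-tailed density, priors are flat on the intercept, slope, and scale, and the posterior is explored with a plain random-walk Metropolis sampler. The heavy tails keep a few gross outliers from dragging the posterior away from the bulk of the data.

```python
# Minimal sketch: Bayesian simple linear regression with heavy-tailed
# (Student-t) errors, sampled by random-walk Metropolis.
import numpy as np
from scipy.stats import t as student_t

def log_post(theta, x, y, nu=4.0):
    a, b, log_s = theta
    s = np.exp(log_s)
    # t log-likelihood of the residuals + Jacobian for the log-scale parameterization
    return student_t.logpdf(y - a - b * x, df=nu, scale=s).sum() + log_s

def metropolis(x, y, n_samples=5000, prop_scale=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([0.0, 0.0, 0.0])
    lp = log_post(theta, x, y)
    draws = []
    for _ in range(n_samples):
        prop = theta + prop_scale * rng.standard_normal(3)
        lp_prop = log_post(prop, x, y)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
            theta, lp = prop, lp_prop
        draws.append(theta.copy())
    return np.array(draws)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    x = np.linspace(0, 10, 100)
    y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)
    y[::25] += 50.0                                  # a few gross outliers
    draws = metropolis(x, y)
    print(draws[1000:].mean(axis=0))                 # posterior means of (a, b, log sigma)
```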




bust

Hierarchical Normalized Completely Random Measures for Robust Graphical Modeling

Andrea Cremaschi, Raffaele Argiento, Katherine Shoemaker, Christine Peterson, Marina Vannucci.

Source: Bayesian Analysis, Volume 14, Number 4, 1271--1301.

Abstract:
Gaussian graphical models are useful tools for exploring network structures in multivariate normal data. In this paper we are interested in situations where data show departures from Gaussianity, therefore requiring alternative modeling distributions. The multivariate $t$-distribution, obtained by dividing each component of the data vector by a gamma random variable, is a straightforward generalization to accommodate deviations from normality such as heavy tails. Since different groups of variables may be contaminated to a different extent, Finegold and Drton (2014) introduced the Dirichlet $t$-distribution, where the divisors are clustered using a Dirichlet process. In this work, we consider a more general class of nonparametric distributions as the prior on the divisor terms, namely the class of normalized completely random measures (NormCRMs). To improve the effectiveness of the clustering, we propose modeling the dependence among the divisors through a nonparametric hierarchical structure, which allows for the sharing of parameters across the samples in the data set. This desirable feature enables us to cluster together different components of multivariate data in a parsimonious way. We demonstrate through simulations that this approach provides accurate graphical model inference, and apply it to a case study examining the dependence structure in radiomics data derived from The Cancer Imaging Atlas.