algorithm

Subgradient-projection-based stable phase-retrieval algorithm for X-ray ptychography

X-ray ptychography is a lensless imaging technique that visualizes the nanostructure of a thick specimen that cannot be observed with an electron microscope. It reconstructs the complex-valued refractive index of the specimen from observed diffraction patterns, a reconstruction problem called phase retrieval (PR). To further improve imaging capability, including extending the depth of field, various PR algorithms have been proposed. Since a high-quality PR method is built upon a base PR algorithm such as ePIE, developing a well-performing base PR algorithm is important. This paper proposes an improved iterative algorithm named CRISP. It exploits subgradient projection, which allows an adaptive step size and helps avoid converging to a poor image. The proposed algorithm was compared with ePIE, a simple and fast-converging algorithm, and with its refinement, rPIE. The experiments confirmed that the proposed method improved reconstruction performance on both simulated and real data.
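The core projection step that algorithms in this family share is easy to state: keep the current phase estimate and replace each modulus with the measured magnitude. A toy sketch of that step (not the CRISP algorithm itself, whose subgradient details are in the paper):

```python
def project_onto_magnitudes(z, b):
    """Project a complex-valued estimate z onto the magnitude
    constraint set {x : |x_k| = b_k}: keep each entry's phase and
    replace its modulus with the measured magnitude b_k. This is the
    basic projection that iterative phase-retrieval schemes build on."""
    out = []
    for zk, bk in zip(z, b):
        r = abs(zk)
        if r == 0.0:
            out.append(complex(bk, 0.0))  # phase undefined at 0: pick phase 0
        else:
            out.append(bk * zk / r)       # b_k * exp(i * angle(z_k))
    return out

# Magnitudes become b while phases are preserved:
z = [1 + 1j, 0.5j, -2 + 0j]
b = [2.0, 3.0, 1.0]
p = project_onto_magnitudes(z, b)
```

In a full ptychographic loop this projection is applied in the diffraction plane at every scan position, interleaved with object and probe updates.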




algorithm

Ptychographic phase retrieval via a deep-learning-assisted iterative algorithm

Ptychography is a powerful computational imaging technique with microscopic imaging capability and adaptability to various specimens. To obtain an imaging result, it requires a phase-retrieval algorithm whose performance directly determines the imaging quality. Recently, deep neural network (DNN)-based phase retrieval has been proposed to improve the imaging quality over ordinary model-based iterative algorithms. However, DNN-based methods have limitations: they are sensitive to changes in experimental conditions, and it is difficult to collect enough measured specimen images to train the DNN. To overcome these limitations, a ptychographic phase-retrieval algorithm that combines model-based and DNN-based approaches is proposed. This method exploits a DNN-based denoiser to assist an iterative algorithm like ePIE in finding better reconstructed images. This combination of DNN and iterative algorithms allows the measurement model to be explicitly incorporated into the DNN-based approach, improving its robustness to changes in experimental conditions. Furthermore, to circumvent the difficulty of collecting training data, it is proposed that the DNN-based denoiser be trained without actual measured specimen images, using a formula-driven supervised approach that systematically generates synthetic images. In experiments using simulations based on a hard X-ray ptychographic measurement system, the imaging capability of the proposed method was evaluated by comparing it with ePIE and rPIE. The results demonstrated that the proposed method was able to reconstruct higher-spatial-resolution images with half the number of iterations required by ePIE and rPIE, even for data with low illumination intensity. The proposed method was also shown to be robust to its hyperparameters.
In addition, the proposed method was applied to ptychographic datasets of a Siemens star chart and ink toner particles measured at SPring-8 BL24XU, which confirmed that it can successfully reconstruct images from measurement scans with a lower overlap ratio of the illumination regions than is required by ePIE and rPIE.
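The combination described above follows the plug-and-play pattern: alternate a step that enforces consistency with the measurements with a denoising step. A schematic sketch, with a simple moving-average filter standing in for the DNN denoiser and an identity forward model (both simplifications; the paper's operators are ptychographic):

```python
def data_consistency_step(x, y, step=0.5):
    """Gradient step pulling the estimate x toward the measurements y.
    With an identity forward model, the gradient of 0.5*||x - y||^2
    is simply (x - y)."""
    return [xi - step * (xi - yi) for xi, yi in zip(x, y)]

def denoise(x):
    """Stand-in for the DNN denoiser: a 3-tap moving average."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def plug_and_play(y, iters=20):
    """Alternate measurement-consistency and denoising steps."""
    x = [0.0] * len(y)
    for _ in range(iters):
        x = data_consistency_step(x, y)
        x = denoise(x)
    return x

# Noisy piecewise-constant signal: the iteration smooths each plateau
# while staying close to the measurements.
y = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
x = plug_and_play(y)
```

Swapping the averaging filter for a trained network and the identity model for the ptychographic forward operator gives the structure of the proposed method.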




algorithm

distect: automatic sample-position tracking for X-ray experiments using computer vision algorithms

Soft X-ray spectroscopy is an important technique for measuring the fundamental properties of materials. However, for measurements of samples in the sub-millimetre range, many experimental setups show limitations. Position drifts on the order of hundreds of micrometres during thermal stabilization of the system can last for hours of expensive beam time. To compensate for drifts, sample tracking and feedback systems must be used. However, in complex sample environments where sample access is very limited, many existing solutions cannot be applied. In this work, we apply a robust computer vision algorithm to automatically track and readjust the sample position at the tens-of-micrometres scale. Our approach is applied in a complex sample environment, where the sample is in an ultra-high vacuum chamber, surrounded by cooled thermal shields to reach sample temperatures down to 2.5 K, at the centre of a superconducting split coil. Our implementation allows sample-position tracking and adjustment in the vertical direction, since this is the dimension where drifts occur during sample temperature change in our setup. The approach can be easily extended to 2D. The algorithm enables a factor of ten improvement in the overlap of a series of X-ray absorption spectra in a sample with a vertical size down to 70 µm. This solution can be used in a variety of experimental stations where optical access is available and sample access by other means is reduced.
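The kind of drift tracking described here can be illustrated with the simplest form of template matching: slide a reference intensity profile along the current one and keep the offset with the highest correlation. A 1D sketch (a hypothetical stand-in, not the distect implementation):

```python
def find_vertical_shift(reference, current, max_shift=10):
    """Return the integer shift (in pixels) that best aligns `current`
    with `reference`, by maximizing the overlap correlation -- the 1D
    analogue of template matching used for drift correction."""
    best_shift, best_score = 0, float("-inf")
    n = len(reference)
    for s in range(-max_shift, max_shift + 1):
        score, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                score += reference[i] * current[j]
                count += 1
        if count:
            score /= count  # normalize by overlap length
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# A sample feature (bright peak) drifted 3 pixels in one direction:
ref = [0, 0, 0, 1, 5, 1, 0, 0, 0, 0, 0, 0]
cur = [0, 0, 0, 0, 0, 0, 1, 5, 1, 0, 0, 0]
shift = find_vertical_shift(ref, cur)
```

The feedback loop then commands the sample stage to move by the negative of the detected shift before the next acquisition.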




algorithm

A general Bayesian algorithm for the autonomous alignment of beamlines

Autonomous methods to align beamlines can decrease the amount of time spent on diagnostics, and also uncover better global optima leading to better beam quality. The alignment of these beamlines is a high-dimensional expensive-to-sample optimization problem involving the simultaneous treatment of many optical elements with correlated and nonlinear dynamics. Bayesian optimization is a strategy of efficient global optimization that has proved successful in similar regimes in a wide variety of beamline alignment applications, though it has typically been implemented for particular beamlines and optimization tasks. In this paper, we present a basic formulation of Bayesian inference and Gaussian process models as they relate to multi-objective Bayesian optimization, as well as the practical challenges presented by beamline alignment. We show that the same general implementation of Bayesian optimization with special consideration for beamline alignment can quickly learn the dynamics of particular beamlines in an online fashion through hyperparameter fitting with no prior information. We present the implementation of a concise software framework for beamline alignment and test it on four different optimization problems for experiments on X-ray beamlines at the National Synchrotron Light Source II and the Advanced Light Source, and an electron beam at the Accelerator Test Facility, along with benchmarking on a simulated digital twin. We discuss new applications of the framework, and the potential for a unified approach to beamline alignment at synchrotron facilities.
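The ingredients named here (a Gaussian process surrogate plus an acquisition rule) can be sketched in miniature: fit a GP to the evaluations so far, then evaluate wherever the upper confidence bound is highest. A toy 1D version with a made-up objective standing in for beamline flux (the actual framework is multi-objective and far more careful):

```python
import math

def rbf(a, b, length_scale=0.2):
    """Squared-exponential kernel: correlation decays with distance."""
    return math.exp(-0.5 * ((a - b) / length_scale) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    """Zero-mean GP posterior mean and variance at query point xq."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    k = [rbf(a, xq) for a in xs]
    mean = sum(ki * ai for ki, ai in zip(k, solve(K, ys)))
    var = rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, solve(K, k)))
    return mean, max(var, 0.0)

def bayes_opt(f, iters=10, beta=2.0):
    """Maximize f on [0, 1]: repeatedly evaluate wherever the GP's
    upper confidence bound (mean + beta * std) is highest on a grid."""
    xs = [0.0, 1.0]
    ys = [f(x) for x in xs]
    grid = [i / 50.0 for i in range(51)]
    for _ in range(iters):
        def ucb(x):
            m, v = gp_posterior(xs, ys, x)
            return m + beta * math.sqrt(v)
        x_next = max(grid, key=ucb)
        xs.append(x_next)
        ys.append(f(x_next))
    best = max(range(len(xs)), key=lambda i: ys[i])
    return xs[best], ys[best]

# Hypothetical 1D "beamline": flux peaks when the pitch parameter is 0.3.
x_best, y_best = bayes_opt(lambda x: -(x - 0.3) ** 2)
```

The online hyperparameter fitting mentioned in the abstract would replace the fixed length scale here with one learned from the accumulated data.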




algorithm

The general equation of δ direct methods and the novel SMAR algorithm residuals using the absolute value of ρ and the zero conversion of negative ripples

The general equation of the δ direct methods is established and applied in its difference form to the definition of one of the two residuals that constitute the SMAR phasing algorithm. These two residuals use the absolute value of ρ and/or the zero conversion of negative Fourier ripples (≥50% of the unit-cell volume). Alternatively, when solved for ρ, the general equation provides a simple derivation of the already known δM tangent formula.




algorithm

AI in the workplace: The good, the bad, and the algorithmic

While AI can liberate us from tedious tasks and even eliminate human error, it's crucial to remember its weaknesses and the unique capabilities that humans bring to the table




algorithm

132: The Actual Mind of the Algorithm

Grey released his new video, Myke has collaboration questions, and they both dig into what upload gaps can mean for content creators.




algorithm

Matt Michel: 8 steps to beating the big tech algorithm

If you rely on search engine marketing for calls, you are putting your company at risk because you are relying on the unreliable. Here are eight steps to wean your company off big tech dependency.




algorithm

Algorithms Won’t Solve All Your Pricing Problems

Marco Bertini, marketing professor at Esade Business School, says more and more companies are turning to pricing algorithms to maximize profits. But many are unaware of a big downside. The constant price shifts can hurt the perception of the brand and its products. He warns that overreliance on artificial intelligence and machine learning without considering human psychology can cause serious damage to the customer relationship. And he outlines steps managers should take, including implementing guardrails, overrides, and better communication tactics. With London Business School professor Oded Koenigsberg, Bertini wrote the HBR article "The Pitfalls of Pricing Algorithms."




algorithm

CAST and LIRIS establish partnership to apply advanced graph visualization algorithms

The goal of the collaboration is to develop advanced algorithms that yield more efficient and user-friendly visual representations of application structures




algorithm

Recap of the “Gephi Week” at SciencePo: inquiring the community detection algorithm of Gephi

The CNRS, the Gephi Consortium and the University of Aalborg...




algorithm

New Auphonic Website, Free Advanced Algorithms and More Examples

To start a new decade of automatic audio post production with Auphonic, we are happy to launch a few updates:

New Website Design

Opening the new homepage today, you might have noticed that our website looks different from what you were used to. Keeping our customers' feedback from last year in mind, we designed a new vision for Auphonic.

Our new website features a refreshed look with an improved, more engaging, and functional user experience. Moreover, a more straightforward, intuitive, and accessible navigation will give you a seamless workflow and a comfortable exploration of Auphonic’s features.
We hope this makes it easier to explore the diversity of applications that Auphonic has. As before, you will have the full functionality of Auphonic available to you, plus some extra features if you are using our paid packages or subscriptions.

Take a look yourself: New Auphonic Landing Page

Free Access to our Advanced and Beta Algorithms

In the past, only paying Auphonic users had access to the advanced algorithm parameters, to multitrack advanced audio algorithms, and to our Dynamic Denoising and AutoEQ beta models.

We have now enabled all advanced algorithms for free users: you can use them on two hours of audio for free each month!

Using the Dynamic Denoiser, you can define whether Auphonic should remove only static noise or also fast-changing noises, and whether music should be kept or eliminated. For even greater control of speech intelligibility, it is possible to manually adjust the amount of denoising to strike the perfect balance between clarity and ambiance.

The AutoEQ automatically analyzes and optimizes the frequency spectrum of a voice recording to remove sibilance (De-Esser) and to create a clear, warm, and pleasant sound.
The equalization of multi-speaker audio can be complex and time-consuming, as each voice requires its own unique frequency spectrum equalization. Our AutoEQ simplifies this process by creating separate, time-dependent EQ profiles for each speaker, ensuring a consistent and pleasant sound output despite any changes in the voices during the recording.

Our advanced algorithm parameters help you to meet all common audio specifications of platforms like Netflix, Audible, podcasts, broadcasters (EBU R128, ATSC A/85, radio and mobile, commercials) in one click. You can define a set of target parameters (integrated loudness, true peak level, dialog normalization, MaxLRA, MaxM, MaxS), like -16 LUFS for podcasts, and we will produce the audio accordingly.
In addition, they offer more control for multitrack productions and for the Adaptive Leveler.
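The final normalization step behind such loudness targets is simple: LUFS is a dB-like scale, so hitting the target means applying one gain factor. (Measuring integrated loudness itself requires K-weighting and gating per ITU-R BS.1770, which is omitted in this sketch.)

```python
def gain_to_target(measured_lufs, target_lufs):
    """Linear gain factor that moves a program's integrated loudness
    from measured_lufs to target_lufs. LUFS differences behave like
    dB, so the conversion is the usual 10 ** (dB / 20)."""
    return 10 ** ((target_lufs - measured_lufs) / 20.0)

# A podcast measured at -19 LUFS, normalized to the common -16 LUFS
# target, needs roughly a +3 dB (about 1.41x) gain on every sample:
g = gain_to_target(-19.0, -16.0)
samples = [0.1, -0.2, 0.05]
normalized = [s * g for s in samples]
```

A production tool additionally has to re-check the true peak after applying the gain, since boosting loudness can push peaks past the target ceiling.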

We would also like to give new Auphonic users some free hours to try out our advanced algorithms. Please use this URL to register your new Auphonic account. The code is valid until the end of March 2023 and will give you 5 extra production hours for the next month. Happy content creation!

More Audio Examples

There is no better way to experience Auphonic than hearing the difference our post production tool makes when applied to different types of audio and content.
We are happy to share that our new features page now contains new audio examples you can listen to while exploring our web tool, and we will add even more examples in the coming weeks.







algorithm

ETSI releases a Technical Report on autonomic network management and control applying machine learning and other AI algorithms


Sophia Antipolis, 5 March 2020

The ETSI Technical Committee on Core Network and Interoperability Testing (TC INT) has just released a Technical Report, ETSI TR 103 626, providing a mapping of architectural components for autonomic networking, cognitive networking and self-management. This architecture will serve the self-managing Future Internet.

The ETSI TR 103 626 provides a mapping of architectural components developed in the European Commission (EC) WiSHFUL and ORCA Projects, using the ETSI Generic Autonomic Networking Architecture (GANA) model.

The objective is to illustrate how the ETSI GANA model specified in ETSI TS 103 195-2 can be implemented using the components developed in these two projects. The Report also shows how the WiSHFUL architecture, augmented with virtualization and hardware acceleration techniques, can implement the GANA model. This will guide implementers of autonomics components for autonomic networks in optimizing their GANA implementations.

The TR addresses autonomic decision-making and associated control loops in wireless network architectures and their associated management and control architectures. The mapping also illustrates how to implement self-management functionality in the GANA model for wireless networks, taking into consideration another Report, ETSI TR 103 495, on how GANA cognitive algorithms for autonomics, such as machine learning and other AI algorithms, can be applied.




algorithm

ETSI Secures Critical Infrastructures against Cyber Quantum Attacks with new TETRA Algorithms


Sophia Antipolis, 8 November 2022

With the world facing growing challenges including the war in Europe and a global energy crisis, it is essential that the mission- and business-critical communications networks used by the public safety, critical infrastructure and utilities sectors (including transportation, electricity, natural gas and water plants) are secured against third-party attacks, to protect communications and sensitive data. With more than 120 countries using dedicated TETRA (Terrestrial Trunked Radio) networks for these critical services, work has been undertaken to ensure the ETSI TETRA technology standard remains robust in the face of evolving threats.

Read More...




algorithm

ETSI and TCCA Statement to TETRA Security Algorithms Research Findings Publication on 24 July 2023

Sophia Antipolis, 24 July 2023

The European Telecommunications Standards Institute (ETSI) and The Critical Communications Association (TCCA) are the proud authorities and custodians of the ETSI TETRA (Terrestrial Trunked Radio) technology standard, one of the world’s most secure and reliable radio communications standards.

Read More...




algorithm

ETSI Releases TETRA Algorithms to Public Domain, maintaining the highest security for its critical communication standard

Sophia Antipolis, 14 November 2023

ETSI is happy to announce that, at an October meeting of its technical committee in charge of the TETRA standard (TCCE), full consensus was reached to place the primitives of all TETRA Air Interface cryptographic algorithms in the public domain.

Read More...




algorithm

Yeast Against the Machine: Bakers’ Yeast Could Improve Diagnosis - How our billion-year-old cousin, baker’s yeast, can reveal — more reliably than leading algorithms — whether a genetic mutation is actually harmful.

Toronto, ON – It’s easier than ever to sequence our DNA, but doctors still can’t exactly tell from our genomes which diseases might befall us. Professor Fritz Roth is setting out to change this by […]




algorithm

Combining X-ray Fluorescence, Infrared Spectroscopy and Software Algorithms for Positive Material and Contaminant Identification

FTIR is the primary method for material and contaminant identification but lacks sensitivity to metallic components. X-ray fluorescence (XRF) can fill this gap and improve identification accuracy.




algorithm

Episode 434: Steven Skiena on Preparing for the Data Structures and Algorithm Job Interview

Steven Skiena speaks with SE Radio’s Adam Conrad about practical applications for data structures and algorithms, as well as takeaways on how best to study Skiena’s book when prepping for the technical interview process.




algorithm

PSTR-PXNR - No-reference pixel-based video quality estimation algorithm





algorithm

[ F.743.22 (12/22) ] - Requirements and architecture for intelligent video surveillance algorithm-training systems





algorithm

Show HN: Stretch My Time Off – An Algorithm to Optimize Your Vacation Days






algorithm

Quantum Algorithm Solves Travelling Salesperson Problem With 1-Qubit

Quantum physicists have developed an algorithm that uses a single qubit to solve a problem that had previously needed thousands of them.




algorithm

Unsupervised Learning Algorithms

Location: Electronic Resource




algorithm

Tools and Algorithms for the Construction and Analysis of Systems 22nd International Conference, TACAS 2016, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016, Eindhoven, The Netherlands, April 2-8, 2016, Proceedings

Location: Electronic Resource




algorithm

Mobile cloud computing : architectures, algorithms and applications

Location: Engineering Library, QA76.585.D425 2016




algorithm

Extremal optimization : fundamentals, algorithms, and applications

Location: Engineering Library, T57.L82 2015




algorithm

Algorithm by The Great Disappointment

https://youtu.be/iTw4VJBx5BU Above is a lyric video we put together last week. This is a song from an album I recorded this summer that was supposed to be a solo project but then turned into a band.




algorithm

291: ‘Algorithms, How Do They Work?’, With Nilay Patel

Nilay Patel returns to the show to discuss this week’s House antitrust hearing featuring testimony from Tim Cook, Jeff Bezos, Sundar Pichai, and Mark Zuckerberg.




algorithm

Algorithmics, the programming school for children




algorithm

Managing Algorithmic Volatility

After the recently announced Google update I've seen some people tweet things like

  • if you are afraid of algorithm updates, you must be a crappy SEO
  • if you are technically perfect in your SEO, updates will only help you

I read those sorts of lines and cringe.

Here's why...

Fragility

Different businesses, business models, and business structures have varying degrees of fragility.

If your business is almost entirely based on serving clients then no matter what you do there is going to be a diverse range of outcomes for clients on any major update.

Let's say 40% of your clients are utterly unaffected by an update & of those who saw any noticeable impact there was a 2:1 ratio in your favor, with twice as many clients improving as falling.

Is that a good update? Does that work well for you?

If you do nothing other than client services as your entire business model, then that update will likely suck for you even though the net client impact was positive.

Why?

Many businesses are hurting after the Covid-19 crisis. Entire categories have been gutted & many people are looking for any reason possible to pull back on budget. Some of the clients who won big on the update might end up cutting their SEO budget figuring they had already won big and that problem was already sorted.

Some of the clients that fell hard are also likely to either cut their budget or call endlessly asking for updates and stressing the hell out of your team.

Capacity Utilization Impacts Profit Margins

Your capacity utilization depends on how high you can keep your steady state load relative to what your load looks like at peaks. When there are big updates management or founders can decide to work double shifts and do other things to temporarily deal with increased loads at the peak, but that can still be stressful as hell & eat away at your mental and physical health as sleep and exercise are curtailed while diet gets worse. The stress can be immense if clients want results almost immediately & the next big algorithm update which reflects your current work may not happen for another quarter year.

How many clients want to be told that their investments went sour but the problem was they needed to double their investment while cashflow is tight and wait a season or two while holding on to hope?

Category-based Fragility

Businesses which appear to be diversified often are not.

  • Everything in hospitality was clipped by Covid-19.
  • 40% of small businesses across the United States have stopped making rent payments.
  • When restaurants massively close that's going to hit Yelp's business hard.
  • Auto sales are off sharply.

Likewise there can be other commonalities in sites which get hit during an update. Not only could it include business category, but it could also be business size, promotional strategies, etc.

Sustained profits either come from brand strength, creative differentiation, or systemization. Many prospective clients do not have the budget to build a strong brand nor the willingness to create something that is truly differentiated. That leaves systemization. Systemization can leave footprints which act as statistical outliers that can be easily neutralized.

Sharp changes can happen at any point in time.

For years Google was funding absolute garbage like Mahalo autogenerated spam and eHow, with each month setting a new record. It is very hard to say "we are doing it wrong" or "we need to change everything" when it works month after month after month.

Then an update happens and poof.

  • Was eHow decent back in the first Internet bubble? Sure. But it lost money.
  • Was it decent after it got bought out for a song and had the paywall dropped in favor of using the new Google AdSense program? Sure.
  • Was it decent the day Demand Media acquired it? Sure.
  • Was it decent on the day of the Demand Media IPO? Almost certainly not. But there was a lag between that day and getting penalized.

Panda Trivia

The first Panda update missed eHow because journalists were so outraged by the narrative associated with the pump-n-dump IPO. They feared their jobs going away and being displaced by that low level garbage, particularly as the market cap of Demand Media eclipsed the New York Times.

Journalist coverage of the pump-n-dump IPO added credence to it from an algorithmic perspective. By constantly writing hate about eHow they made eHow look like a popular brand, generating algorithmic signals that carried the site until Google created an extension which allowed journalists and other webmasters to vote against the site they had been voting for through all their outrage coverage.

Algorithms & the Very Visible Hand

And all algorithmic channels like organic search, the Facebook news feed, or Amazon's product pages go through large shifts across time. If they don't, they get gamed, repetitive, and lose relevance as consumer tastes change and upstarts like TikTok emerge.

Consolidation by the Attention Merchants

Frequent product updates, cloning of upstarts, or outright acquisitions are required to maintain control of distribution:

"The startups of the Rebellion benefited tremendously from 2009 to 2012. But from 2013 on, the spoils of smartphone growth went to an entirely different group: the Empire. ... A network effect to engage your users, AND preferred distribution channels to grow, AND the best resources to build products? Oh my! It’s no wonder why the Empire has captured so much smartphone value and created a dark time for the Rebellion. ... Now startups are fighting for only 5% of the top spots as the Top Free Apps list is dominated by incumbents. Facebook (4 apps), Google (6 apps), and Amazon (4 apps) EACH have as many apps in the Top 100 list as all the new startups combined."

Apple & Amazon

Emojis are popular, so those features got copied, those apps got blocked & then apps using the official emojis also got blocked from distribution. The same thing happens with products on Amazon.com in terms of getting undercut by a house brand which was funded by using the vendor's sales data. Re-buy your brand or else.

Facebook

Before the Facebook IPO some thought buying Zynga shares was a backdoor way to invest into Facebook because gaming was such a large part of the ecosystem. That turned out to be a dumb thesis and horrible trade. At times other things trended including quizzes, videos, live videos, news, self hosted Instant Articles, etc.

Over time the general trend was edge rank of professional publishers fell as a greater share of inventory went to content from friends & advertisers. The metrics associated with the ads often overstated their contribution to sales due to bogus math and selection bias.

Internet-first publishers like CollegeHumor struggled to keep up with the changes & influencers waiting for a Facebook deal had to monetize using third parties:

“I did 1.8 billion views last year,” [Ryan Hamilton] said. “I made no money from Facebook. Not even a dollar.” ... "While waiting for Facebook to invite them into a revenue-sharing program, some influencers struck deals with viral publishers such as Diply and LittleThings, which paid the creators to share links on their pages. Those publishers paid top influencers around $500 per link, often with multiple links being posted per day, according to a person who reached such deals."

YouTube

YouTube had a Panda-like update back in 2012 to favor watch time over raw view counts. They also adjust the ranking algorithms on breaking news topics to favor large & trusted channels over conspiracy theorist content, alternative health advice, hate speech & ridiculous memes like the Tide pod challenge.

All unproven channels need to start somewhat open to gain usage, feedback & marketshare. Once they become real businesses they clamp down. Some of the clamp down can be editorial, forced by regulators, or simply anticompetitive monopolistic abuse.

Kid videos were a huge area on YouTube (perhaps they still are), but that area got cleaned up after autogenerated junk videos were covered in the press & the FTC clipped YouTube for delivering targeted ads on channels which primarily catered to children.

Dominant channels can enforce tying & bundling to wipe out competitors:

"Google’s response to the threat from AppNexus was that of a classic monopolist. They announced that YouTube would no longer allow third-party advertising technology. This was a devastating move for AppNexus and other independent ad technology companies. YouTube was (and is) the largest ad-supported video publisher, with more than 50% market share in most major markets. ... Over the next few months, Google’s ad technology team went to each of our clients and told them that, regardless of how much they liked working with AppNexus, they would have to also use Google’s ad technology products to continue buying YouTube. This is the definition of bundling, and we had no recourse. Even WPP, our largest customer and largest investors, had no choice but to start using Google’s technology. AppNexus growth slowed, and we were forced to lay off 100 employees in 2016."

Everyone Else

Every moderately large platform like eBay, Etsy, Zillow, TripAdvisor or the above sorts of companies runs into these sorts of issues with changing distribution & how they charge for distribution.

Building Anti-fragility Into Your Business Model

Growing as fast as you can until the economy craters or an algorithm clips you almost guarantees a hard fall along with an inability to deal with it.

Markets ebb and flow. And that would be true even if the above algorithmic platforms did not make large, sudden shifts.

Build Optionality Into Your Business Model

If your business primarily relies on publishing your own websites or you have a mix of a few clients and your own sites then you have a bit more optionality to your approach in dealing with updates.

Even if you only have one site and your business goes to crap maybe you at least temporarily take on a few more consulting clients or do other gig work to make ends meet.

Focus on What is Working

If you have a number of websites you can pour more resources into whatever sites reacted positively to the update while (at least temporarily) ignoring any site that was burned to a crisp.

Ignore the Dead Projects

The holding cost of many websites is close to zero unless they use proprietary and complex content management systems. Waiting out a penalty until you run out of obvious improvements on your winning sites is not a bad strategy. Plus, if you think the burned site is going to be perpetually burned to a crisp (alternative health anyone?) then you could sell links off it or generate other alternative revenue streams not directly reliant on search rankings.

Build a Cushion

If you have cash savings maybe you go out and buy some websites or domain names from other people who are scared of the volatility or got clipped for issues you think you could easily fix.

When the tide goes out debt leverage limits your optionality. Savings gives you optionality. Having slack in your schedule also gives you optionality.

The person with a lot of experience & savings would love to see highly volatile search markets because those will wash out some of the competition, curtail investments from existing players, and make other potential competitors more hesitant to enter the market.





algorithm

How to Read Google Algorithm Updates

Links = Rank

Old Google (pre-Panda) was to some degree largely the following: links = rank.

Once you had enough links to a site you could literally pour content into a site like water and have the domain's aggregate link authority help anything on that site rank well quickly.

As much as PageRank was hyped & important, having a diverse range of linking domains and keyword-focused anchor text were important.
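"Links = rank" in its purest form is PageRank: a page's score is the probability that a random surfer following links lands on it. A minimal power-iteration sketch on a toy graph (standard 0.85 damping factor; real link graphs obviously add many more signals on top):

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank. `links` maps each page to the pages
    it links out to; each round, every page's score is redistributed
    along its outlinks, with (1 - damping) teleportation."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank everywhere
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# "a" is linked by everyone, so link equity concentrates there.
graph = {"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a", "b"]}
ranks = pagerank(graph)
```

In the pre-Panda world described above, pouring new pages into a domain let them inherit this kind of accumulated link equity almost immediately.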

Brand = Rank

After Vince then Panda a site's brand awareness (or, rather, ranking signals that might best simulate it) were folded into the ability to rank well.

Panda considered factors beyond links & when it first rolled out it would clip anything on a particular domain or subdomain. Some sites like HubPages shifted their content into per-user subdomains. And some aggressive spammers would rotate their entire site onto different subdomains repeatedly each time a Panda update happened. That allowed those sites to immediately recover from the first couple Panda updates, but eventually Google closed off that loophole.

Any signal which gets relied on eventually gets abused intentionally or unintentionally. And over time it leads to a "sameness" of the result set unless other signals are used:

Google is absolute garbage for searching anything related to a product. If I'm trying to learn something invariably I am required to search another source like Reddit through Google. For example, I became introduced to the concept of weighted blankets and was intrigued. So I Google "why use a weighted blanket" and "weighted blanket benefits". Just by virtue of the word "weighted blanket" being in the search I got pages and pages of nothing but ads trying to sell them, and zero meaningful discourse on why I would use one

Getting More Granular

Over time as Google got more refined with Panda broad-based sites outside of the news vertical often fell on tough times unless they were dedicated to some specific media format or had a lot of user engagement metrics like a strong social network site. That is a big part of why the New York Times sold About.com for less than they paid for it & after IAC bought it they broke it down into a variety of sites like: Verywell (health), the Spruce (home decor), the Balance (personal finance), Lifewire (technology), Tripsavvy (travel) and ThoughtCo (education & self-improvement).

Penguin further clipped aggressive anchor text built on low quality links. When the Penguin update rolled out Google also rolled out an on-page spam classifier to further obfuscate the update. And the Penguin update was sandwiched by Panda updates on either side, making it hard for people to reverse engineer any signal out of weekly winners and losers lists from services that aggregate massive amounts of keyword rank tracking data.

So much of the link graph has been decimated that Google reversed their stance on nofollow: as of March 1st of this year they treat it as a hint rather than a directive for ranking purposes. Many mainstream media websites were overusing nofollow or not citing sources at all, so this additional layer of obfuscation on Google's part will allow them to find more signal in that noise.

May 4, 2020 Algo Update

On May 4th Google rolled out another major core update.

I saw some sites which had their rankings suppressed for years see a big jump. But many things changed at once.

Wedge Issues

On some political search queries which were primarily classified as being news related Google is trying to limit political blowback by showing official sites and data scraped from official sites instead of putting news front & center.

"Google’s pretty much made it explicit that they’re not going to propagate news sites when it comes to election related queries and you scroll and you get a giant election widget in your phone and it shows you all the different data on the primary results and then you go down, you find Wikipedia, you find other like historical references, and before you even get to a single news article, it’s pretty crazy how Google’s changed the way that the SERP is intended."

That change reflects the permanent change to the news media ecosystem brought on by the web.

YMYL

A blog post by Lily Ray from Path Interactive used Sistrix data to show many of the sites which saw high volatility were in the healthcare vertical & other your money, your life (YMYL) categories.

Aggressive Monetization

One of the more interesting pieces of feedback on the update was from Rank Ranger, where they looked at particular pages that jumped or fell hard on the update. They noticed sites that put ads or ad-like content front and center may have seen sharp falls on some of those big money pages which were aggressively monetized:

Seeing this all but cements the notion (in my mind at least) that Google did not want content unrelated to the main purpose of the page to appear above the fold to the exclusion of the page's main content! Now for the second wrinkle in my theory.... A lot of the pages being swapped out for new ones did not use the above-indicated format where a series of "navigation boxes" dominated the page above the fold.

The above shift had a big impact on some sites which are worth serious money. Intuit paid over $7 billion to acquire Credit Karma, but their credit card affiliate pages recently slid hard.

The above sort of shift reflects Google getting more granular with their algorithms. Early Panda was all or nothing. Then it started to have different levels of impact throughout different portions of a site.

Brand was sort of a band aid or a rising tide that lifted all (branded) boats. Now we are seeing Google get more granular with their algorithms where a strong brand might not be enough if they view the monetization as being excessive. That same focus on page layout can have a more adverse impact on small niche websites.

One of my old legacy clients had a site which was primarily monetized by the Amazon affiliate program. About a month ago Amazon chopped affiliate commissions in half & then the aggressive ad placement caused search traffic to the site to get chopped in half when rankings slid on this update.

Their site has been trending down over the past couple years, largely due to neglect, as it was always a small side project. They improved some of the content about a month ago, and that ended up leading to a bit of a boost, but then this update came. As long as that ad placement doesn't change, the declines are likely to continue.

They just recently removed that ad unit, but that meant another drop in income, as until there is another big algo update they're likely to stay at around half search traffic. So now they have a half of a half of a half. Good thing the site did not have any full-time employees or they'd be among the millions of newly unemployed. That experience really reflects how websites can be almost like debt-levered companies in terms of going under virtually overnight. Who can have revenue slide around 88% and then increase investment in the property using the remaining 12% while they wait for the site to be rescored for a quarter-year or more?

"If you have been negatively impacted by a core update, you (mostly) cannot see recovery from that until another core update. In addition, you will only see recovery if you significantly improve the site over the long-term. If you haven’t done enough to improve the site overall, you might have to wait several updates to see an increase as you keep improving the site. And since core updates are typically separated by 3-4 months, that means you might need to wait a while."

Almost nobody can afford to do that unless the site is just a side project.

Google could choose to run major updates more frequently, allowing sites to recover more quickly, but they gain economic benefit in defunding SEO investments & adding opportunity cost to aggressive SEO strategies by ensuring ranking declines on major updates last a season or more.

Choosing a Strategy vs Letting Things Come at You

They probably should have lowered their ad density when they did those other upgrades. If they had they likely would have seen rankings at worst flat or likely up as some other competing sites fell. Instead they are rolling with a half of a half of a half on the revenue front. Glenn Gabe preaches the importance of fixing all the problems you can find rather than just fixing one or two things and hoping it is enough. If you have a site which is on the edge you sort of have to consider the trade offs between various approaches to monetization.

  • monetize it lightly and hope the site does well for many years
  • monetize it slightly aggressively while using the extra income to further improve the site elsewhere and ensure you have enough to get by any lean months
  • aggressively monetize the site shortly after a major ranking update if it was previously lightly monetized & then hope to sell it off a month or two later before the next major algorithm update clips it again

Outcomes will depend partly on timing and luck, but consciously choosing a strategy is likely to yield better returns than doing a bit of mix-n-match while having your head buried in the sand.

Reading the Algo Updates

You can spend 50 or 100 hours reading blog posts about the update and learn precisely nothing in the process if you do not know which authors are bullshitting and which authors are writing about the correct signals.

But how do you know who knows what they are talking about?

It is more than a bit tricky as the people who know the most often do not have any economic advantage in writing specifics about the update. If you primarily monetize your own websites, then the ignorance of the broader market is a big part of your competitive advantage.

Making things even trickier, the less you know the more likely Google is to trust you with relaying their official messaging. If you syndicate their messaging without questioning it, you get a treat: more exclusives. If you question their messaging in a way that undermines their goals, you quickly become persona non grata – something CNET learned many years ago when they published Eric Schmidt's address.

It would be unlikely you'd see the following sort of Tweet from, say, Blue Hat SEO or Fantomaster.

To be able to read the algorithms well you have to have some market sectors and keyword groups you know well. Passively collecting an archive of historical data makes the big changes stand out quickly.

Everyone who depends on SEO to make a living should subscribe to an online rank tracking service or run something like Serposcope locally to track at least a dozen or two dozen keywords. If you track rankings locally it makes sense to use a set of web proxies and run the queries slowly through each so you don't get blocked.
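As a rough illustration of what local tracking involves (not tied to Serposcope or any particular product; the domains, URLs, and proxy names below are hypothetical), a minimal tracker needs little more than a function that finds where a domain ranks in a scraped results list, plus a round-robin proxy rotator so no single IP issues every query:

```python
import itertools
from urllib.parse import urlparse

def rank_of(domain, serp_urls):
    """Return the 1-based position of `domain` in an ordered list of
    result URLs, or None if it is not ranking in that list."""
    for pos, url in enumerate(serp_urls, start=1):
        host = urlparse(url).netloc.lower()
        if host == domain or host.endswith("." + domain):
            return pos
    return None

def rotating_proxies(proxies):
    """Cycle through proxies round-robin; pair each request with the next
    proxy (and a delay between queries) to avoid getting blocked."""
    return itertools.cycle(proxies)

# Hypothetical scraped SERP for one tracked keyword:
serp = [
    "https://www.wikipedia.org/wiki/Weighted_blanket",
    "https://www.example.com/weighted-blanket-benefits",
    "https://shop.example.net/blankets",
]
print(rank_of("example.com", serp))  # -> 2
```

Recording that position per keyword per day is all the historical archive amounts to; the value comes from watching the deltas over time.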

You should track a diverse range of keywords to get a true sense of the algorithmic changes:

  • a couple different industries
  • a couple different geographic markets (or at least some local-intent vs national-intent terms within a country)
  • some head, midtail and longtail keywords
  • sites of different size, age & brand awareness within a particular market

Some tools make it easy to quickly add or remove graphing of anything which moved big and is in the top 50 or 100 results, which can help you quickly find outliers. And some tools also make it easy to compare their rankings over time. As updates develop you'll often see multiple sites making big moves at the same time & if you know a lot about the keyword, the market & the sites you can get a good idea of what might have been likely to change to cause those shifts.
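The "flag anything which moved big" step those tools perform can be sketched in a few lines, assuming you already keep rank history per keyword (the keywords, dates, and positions below are made up for illustration):

```python
def big_movers(history, threshold=10):
    """Flag keywords whose rank changed by more than `threshold` positions
    between the two most recent snapshots. Positive delta = improvement
    (moved up the rankings), negative = fell."""
    movers = []
    for kw, ranks in history.items():
        dates = sorted(ranks)
        if len(dates) < 2:
            continue
        delta = ranks[dates[-2]] - ranks[dates[-1]]
        if abs(delta) >= threshold:
            movers.append((kw, delta))
    # biggest absolute moves first, so outliers surface at the top
    return sorted(movers, key=lambda m: abs(m[1]), reverse=True)

history = {
    "weighted blanket benefits": {"2020-04-27": 35, "2020-05-04": 8},   # jumped 27 spots
    "weighted blanket reviews":  {"2020-04-27": 12, "2020-05-04": 14},  # ordinary noise
    "buy weighted blanket":      {"2020-04-27": 6,  "2020-05-04": 41},  # fell 35 spots
}
print(big_movers(history))  # -> [('buy weighted blanket', -35), ('weighted blanket benefits', 27)]
```

The threshold is doing the same job as the "top 50 or 100 results" filter in commercial tools: it hides routine churn so the update-driven outliers stand out.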

Once you see someone mention outliers that most people miss – ones that align with what you see in your own data set – your level of confidence increases and you can spend more time trying to unravel what signals changed.

I've read influential industry writers mention that links were heavily discounted on this update. I have also read Tweets like this one which could potentially indicate the opposite.

If I had little to no data, I wouldn't be able to get any signal out of that range of opinions. I'd sort of be stuck at "who knows."

By having my own data to track, I can quickly figure out which message is more in line with what I saw in my subset of data & form a more solid hypothesis.

No Single Smoking Gun

As Glenn Gabe is fond of saying, sites that tank usually have multiple major issues.

Google rolls out major updates infrequently enough that they can sandwich a couple different aspects into major updates at the same time in order to make it harder to reverse engineer updates. So it does help to read widely with an open mind and imagine what signal shifts could cause the sorts of ranking shifts you are seeing.

Sometimes site level data is more than enough to figure out what changed, but as the above Credit Karma example showed sometimes you need to get far more granular and look at page-level data to form a solid hypothesis.

As the World Changes, the Web Also Changes

About 15 years ago online dating was seen as a weird niche for recluses who perhaps typically repulsed real people in person. Now there are all sorts of niche specialty dating sites including a variety of DTF type apps. What was once weird & absurd had over time become normal.

The COVID-19 scare is going to cause lasting shifts in consumer behavior that accelerate the movement of commerce online. A decade of change will happen in a year or two across many markets.

Telemedicine will grow quickly. Facebook is adding commerce features directly onto its platform through a partnership with Shopify. Spotify is spending big money to buy exclusive rights to distribute widely followed podcasters like Joe Rogan. Uber recently offered to acquire GrubHub. Google and Apple will continue adding financing features to their mobile devices. Movie theaters have lost much of their appeal.

Tons of offline "value" businesses ended up having no value after months of revenue disappearing while large outstanding debts accumulated interest. There is a belief that some of those brands will have strong latent brand value that carries over online, but if they were weak even when the offline stores acting like interactive billboards subsidized consumer awareness of their brands then as those stores close the consumer awareness & loyalty from in-person interactions will also dry up. A shell of a company rebuilt around the Toys R' Us brand is unlikely to beat out Amazon's parallel offering or a company which still runs stores offline.

Big box retailers like Target & Walmart are growing their online sales at hundreds of percent year over year.

There will be waves of bankruptcies, dramatic shifts in commercial real estate prices (already reflected in plunging REIT prices), and more people working remotely (shifting residential real estate demand from the urban core back out into suburbs).

People who work remotely are easier to hire and easier to fire. Those who keep leveling up their skills will eventually get rewarded while those who don't will rotate jobs every year or two. The lack of stability will increase demand for education, though much of that incremental demand will be around new technologies and specific sectors - certificates or informal training programs instead of degrees.

More and more activities will become normal online activities.

The University of California has about a half-million students & in the fall semester they are going to try to have most of those classes happen online. How much usage data does Google gain as thousands of institutions put more and more of their infrastructure and service online?

A lot of B & C level schools are going to go under as the like-vs-like comparison gets easier. Back when I ran a membership site here a college paid us to have students gain access to our membership area of the site. As online education gets normalized many unofficial trade-related sites will look more economically attractive on a relative basis.

If core institutions of the state deliver most of their services online, then other companies can be expected to follow. When big cities publish lists of crimes they will not respond to during economic downturns they are effectively subsidizing more crime. That in turn makes moving to somewhere a bit more rural & cheaper make sense, particularly when you no longer need to live near your employer.





algorithm

How Chordcat Works, A Chord Naming Algorithm | Shriram's Blog




algorithm

Algorithms we develop software by

Start working on the feature at the beginning of the day. If you don't finish by the end of the day, delete it all and start over the next day. You're allowed to keep unit tests you wrote.




algorithm

The CVM Algorithm • Buttondown




algorithm

The Paragon Algorithm, a Next Generation Search Engine That Uses Sequence Temperature Values and Feature Probabilities to Identify Peptides from Tandem Mass Spectra

Ignat V. Shilov
Sep 1, 2007; 6:1638-1655
Technology




algorithm

Convergence, finiteness and periodicity of several new algorithms of p-adic continued fractions

Zhaonan Wang and Yingpu Deng
Math. Comp. 93 (), 2921-2942.
Abstract, references and article information




algorithm

Rage Against the Algorithm: the Risks of Overestimating Military Artificial Intelligence

27 August 2020

Yasmin Afina

Research Assistant, International Security Programme
Increasing dependency on artificial intelligence (AI) for military technologies is inevitable, and efforts to develop these technologies for battlefield use are proceeding apace; however, developers and end-users must ensure the reliability of these technologies, writes Yasmin Afina.


F-16 SimuSphere HD flight simulator at Link Simulation in Arlington, Texas, US. Photo: Getty Images.

AI holds the potential to replace humans for tactical tasks in military operations beyond current applications such as navigation assistance. For example, in the US, the Defense Advanced Research Projects Agency (DARPA) recently held the final round of its AlphaDogfight Trials where an algorithm controlling a simulated F-16 fighter was pitted against an Air Force pilot in virtual aerial combat. The algorithm won by 5-0. So what does this mean for the future of military operations?

The agency’s deputy director remarked that these tools are now ‘ready for weapons systems designers to be in the toolbox’. At first glance, the dogfight shows that AI-enabled air combat would provide tremendous military advantages, including the lack of survival instincts inherent to humans, the ability to consistently operate with high acceleration stress beyond the limitations of the human body, and high targeting precision.

The outcome of these trials, however, does not mean that this technology is ready for deployment in the battlefield. In fact, an array of considerations must be taken into account prior to their deployment and use – namely the ability to adapt in real-life combat situations, physical limitations and legal compliance.

Testing environment versus real-life applications

First, as with all technologies, the performance of an algorithm in its testing environment is bound to differ from its performance in real-life applications, as in the case of cluster munitions. For instance, Google Health developed an algorithm to help with diabetic retinopathy screening. While the algorithm’s accuracy rate in the lab was over 90 per cent, it did not perform well outside the lab: because the algorithm had been trained on high-quality scans, it rejected more than a fifth of the real-life scans, which were deemed below the required quality threshold. As a result, the process ended up being as time-consuming and costly – if not more so – than traditional screening.

Similarly, virtual environments akin to the AlphaDogfight Trials do not reflect the extent of risks, hazards and unpredictability of real-life combat. In the dogfight exercise, for example, the algorithm had full situational awareness and was repeatedly trained to the rules, parameters and limitations of its operating environment. But in a real-life, dynamic battlefield, the list of variables is long and will inevitably fluctuate: visibility may be poor, extreme weather could affect operations and the performance of aircraft, and the behaviour and actions of adversaries will be unpredictable.

Every single eventuality would need to be programmed in line with the commander’s intent in an ever-changing situation; otherwise the performance of the algorithms, including target identification and firing precision, would be drastically affected.

Hardware limitations

Another consideration relates to the limitations of the hardware that AI systems depend on. Algorithms depend on hardware to operate equipment such as sensors and computer systems – each of which are constrained by physical limitations. These can be targeted by an adversary, for example, through electronic interference to disrupt the functioning of the computer systems which the algorithms are operating from.

Hardware may also be affected involuntarily. For instance, a ‘pilotless’ aircraft controlled by an algorithm can indeed undergo higher accelerations, and thus, higher g-force than the human body can endure. However, the aircraft in itself is also subject to physical limitations such as acceleration limits beyond which parts of the aircraft, such as its sensors, may be severely damaged which in turn affects the algorithm’s performance and, ultimately, mission success. It is critical that these physical limitations are factored into the equation when deploying these machines especially when they so heavily rely on sensors.

Legal compliance

Another major, and perhaps the greatest, consideration relates to the ability to rely on machines for legal compliance. The DARPA dogfight exclusively focused on the algorithm’s ability to successfully control the aircraft and counter the adversary, however, nothing indicates its ability to ensure that strikes remain within the boundaries of the law.

In an armed conflict, the deployment and use of such systems in the battlefield are not exempt from international humanitarian law (IHL), most notably its customary principles of distinction, proportionality and precautions in attack. The system would need to be able to differentiate between civilians, combatants and military objectives, calculate whether its attacks will be proportionate against the set military objective, produce live collateral damage estimates, and take the necessary precautions to ensure the attacks remain within the boundaries of the law – including the ability to abort if necessary. This would also require the machine to have the ability to stay within the rules of engagement for that particular operation.

It is therefore critical to incorporate IHL considerations from conception and throughout the development and testing phases of algorithms to ensure the machines are sufficiently reliable for legal compliance purposes.

It is also important that developers address the 'black box' issue whereby the algorithm’s calculations are so complex that it is impossible for humans to understand how it came to its results. It is not only necessary to address the algorithm’s opacity to improve the algorithm’s performance over time, it is also key for accountability and investigation purposes in cases of incidents and suspected violations of applicable laws.

Reliability, testing and experimentation

Algorithms are becoming increasingly powerful and there is no doubt that they will confer tremendous advantages to the military. Over-hype, however, must be avoided, as it comes at the expense of the machine’s reliability on the technical front as well as for legal compliance purposes.

The testing and experimentation phases are key, as they give developers the ability to fine-tune the algorithms. Developers must, therefore, be held accountable for ensuring the reliability of machines by incorporating considerations pertaining to performance and accuracy, hardware limitations as well as legal compliance. This could help prevent real-life incidents that result from overestimating the capabilities of AI in military operations.




algorithm

Quantum Algorithms Institute Drives Predictive Model Accuracy with Quantum Collaboration

SURREY, British Columbia, Nov. 12, 2024 — Today, the Quantum Algorithms Institute (QAI) announced a partnership with Canadian companies, AbaQus and InvestDEFY Technologies, to solve common challenges in training machine learning […]

The post Quantum Algorithms Institute Drives Predictive Model Accuracy with Quantum Collaboration appeared first on HPCwire.




algorithm

An Algorithm Predicts the Images in a Dream

A learning simulation, combined with fMRI readings, is able to predict the visualizations seen by a dreamer in real time




algorithm

NY Passes Two Kids Privacy Bills to Restrict Access to Addictive Algorithmic Feeds

The New York legislature passed two bills on June 7, 2024 directed at children’s use of online technologies – the Stop Addictive Feeds Exploitation (SAFE) for Kids Act (S7694) that restricts access to addictive algorithmic feeds and the New York Child Data Protection Act (S7695) that bans sites from collecting, using, sharing or selling personal […]




algorithm

New Algorithm Predicts Familial High Cholesterol Levels

Researchers at the Queen Mary University of London developed and tested an algorithm that detects familial hypercholesterolemia. They named the algorithm




algorithm

Investigation: How TikTok's Algorithm Figures Out Your Deepest Desires

A Wall Street Journal investigation found that TikTok only needs one important piece of information to figure out what you want: the amount of time you linger over a piece of content. Every second you hesitate or rewatch, the app is tracking you. Photo illustration: Laura Kammermann/The Wall Street Journal




algorithm

Advanced algorithm for step detection in single-entity electrochemistry: a comparative study of wavelet transforms and convolutional neural networks

Faraday Discuss., 2024, Advance Article
DOI: 10.1039/D4FD00130C, Paper
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Ziwen Zhao, Arunava Naha, Nikolaos Kostopoulos, Alina Sekretareva
In this study, two approaches for step detection in single-entity electrochemistry data are developed and compared: discrete wavelet transforms and convolutional neural networks.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




algorithm

Re-evaluating retrosynthesis algorithms with Syntheseus

Faraday Discuss., 2024, Advance Article
DOI: 10.1039/D4FD00093E, Paper
Krzysztof Maziarz, Austin Tripp, Guoqing Liu, Megan Stanley, Shufang Xie, Piotr Gaiński, Philipp Seidl, Marwin H. S. Segler
Syntheseus provides reference models and search algorithms as well as metrics to evaluate and improve synthesis planning tools.




algorithm

A microfluidic microalgae detection system for cellular physiological response based on object detection algorithm

Lab Chip, 2024, Accepted Manuscript
DOI: 10.1039/D3LC00941F, Paper
Shizheng Zhou, Tianhui Chen, Edgar S. Fu, Liuyong Shi, Teng Zhou, Hong YAN
The composition of species and the physiological status of microalgal cells serve as significant indicators for monitoring marine environments. Symbiotic with corals, Symbiodiniacea are more sensitive to the environmental response....




algorithm

An improved cancer diagnosis algorithm for protein mass spectrometry based on PCA and a one-dimensional neural network combining ResNet and SENet

Analyst, 2024, Advance Article
DOI: 10.1039/D4AN00784K, Paper
Liang Ma, Wenqing Gao, Xiangyang Hu, Dongdong Zhou, Chenlu Wang, Jiancheng Yu, Keqi Tang
An improved cancer diagnosis algorithm for protein mass spectrometry based on PCA and 1D neural network combining ResNet and SENet is proposed and successfully applied to the diagnosis of ovarian cancer with high accuracy and strong fitting ability.




algorithm

Algorithmic recommendations and human discretion [electronic resource] / Victoria Angelova, Will S. Dobbie, Crystal Yang

Cambridge, MA. : National Bureau of Economic Research, 2023




algorithm

Sandra Wachter: Exploring fairness, privacy and advertising in an algorithmic world

Sandra Wachter is a Lawyer, Associate Professor and Senior Research Fellow at the University of Oxford. In this video, Sandra discusses how the law can keep up with new technology. In particular, she spoke about her recent work on targeted advertising – a big issue for tech giants including Facebook and Amazon. The law already protects against discrimination based on certain identity traits such as race or gender. But targeted advertisers claim to group people according to “affinity” – an aggregate measure of their online behaviour – not identity. Wachter believes, however, that existing concepts in the law may have something to say about discrimination by affinity. The talk was given at WIRED Pulse: AI at the Barbican, held at The Barbican Centre’s Concert Hall in London on June 15, 2019.