algorithms

Subquadratic-Time Algorithms for Normal Bases. (arXiv:2005.03497v1 [cs.SC])

For any finite Galois field extension $\mathsf{K}/\mathsf{F}$, with Galois group $G = \mathrm{Gal}(\mathsf{K}/\mathsf{F})$, there exists an element $\alpha \in \mathsf{K}$ whose orbit $G\cdot\alpha$ forms an $\mathsf{F}$-basis of $\mathsf{K}$. Such an $\alpha$ is called a normal element and $G\cdot\alpha$ is a normal basis. We introduce a probabilistic algorithm for testing whether a given $\alpha \in \mathsf{K}$ is normal, when $G$ is either a finite abelian or a metacyclic group. The algorithm is based on the fact that deciding whether $\alpha$ is normal can be reduced to deciding whether $\sum_{g \in G} g(\alpha)g \in \mathsf{K}[G]$ is invertible; it requires a slightly subquadratic number of operations. Once we know that $\alpha$ is normal, we show how to perform conversions between the working basis of $\mathsf{K}/\mathsf{F}$ and the normal basis with the same asymptotic cost.
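As a toy illustration of the invertibility criterion (our own brute-force sketch, not the paper's subquadratic algorithm): for $\mathsf{K} = \mathbb{F}_8$ over $\mathsf{F} = \mathbb{F}_2$, the Galois group is generated by the Frobenius map $a \mapsto a^2$, and $\alpha$ is normal exactly when its Frobenius orbit is linearly independent over $\mathbb{F}_2$, i.e. when the matrix of orbit coordinates is invertible.

```python
MOD = 0b1011  # the irreducible polynomial x^3 + x + 1 defining GF(8)

def gf8_mul(a, b):
    """Multiply two GF(8) elements; bits are polynomial coefficients."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= MOD
    return r

def frobenius(a):
    return gf8_mul(a, a)  # the generator a -> a^2 of Gal(GF(8)/GF(2))

def rank_gf2(rows):
    """Rank over GF(2) of 3-bit row vectors, by Gaussian elimination."""
    rows, rank = list(rows), 0
    for bit in (4, 2, 1):
        piv = next((i for i, r in enumerate(rows) if r & bit), None)
        if piv is None:
            continue
        p = rows.pop(piv)
        rows = [r ^ p if r & bit else r for r in rows]
        rank += 1
    return rank

def is_normal(alpha):
    orbit = [alpha, frobenius(alpha), frobenius(frobenius(alpha))]
    return rank_gf2(orbit) == 3

normal_elements = [a for a in range(8) if is_normal(a)]  # [3, 5, 7]
```

Here 3, 5 and 7 encode $x+1$, $x^2+1$ and $x^2+x+1$; they form a single Frobenius orbit, and each generates a normal basis. The paper's contribution is performing the analogous test in slightly subquadratic time for abelian and metacyclic $G$.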




algorithms

A Gentle Introduction to Quantum Computing Algorithms with Applications to Universal Prediction. (arXiv:2005.03137v1 [quant-ph])

In this technical report we give an elementary introduction to Quantum Computing for non-physicists. In this introduction we describe in detail some of the foundational quantum algorithms, including the Deutsch-Jozsa Algorithm, Shor's Algorithm, Grover's Search, and the Quantum Counting Algorithm, and briefly the Harrow-Lloyd Algorithm. Additionally we give an introduction to Solomonoff Induction, a theoretically optimal method for prediction. We then attempt to use quantum computing to find better algorithms for the approximation of Solomonoff Induction. This is done by using techniques from other quantum computing algorithms to achieve a speedup in computing the speed prior, which is an approximation of Solomonoff's prior and a key part of Solomonoff Induction. The major limiting factor is that the probabilities being computed are often so small that, without a sufficiently (often prohibitively) large number of trials, the error may be larger than the result. If a substantial speedup in the computation of an approximation of Solomonoff Induction can be achieved through quantum computing, then this can be applied to the field of intelligent agents as a key part of an approximation of the agent AIXI.
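To make the flavor of these algorithms concrete, here is a minimal statevector sketch of the Deutsch-Jozsa idea in its phase-oracle form (our own toy code, not taken from the report): after applying $H^{\otimes n}$, a phase oracle $(-1)^{f(x)}$, and $H^{\otimes n}$ again, the amplitude of the all-zeros state is $\pm 1$ when $f$ is constant and $0$ when $f$ is balanced, so a single oracle query decides which.

```python
# Minimal statevector simulator; names like `hadamard_n` are ours.

def hadamard_n(state, n):
    """Apply a Hadamard to each of n qubits: the transform has entries
    (-1)^{popcount(x AND y)} / 2^{n/2}."""
    out = [0.0] * (1 << n)
    norm = 2 ** (n / 2)
    for x, amp in enumerate(state):
        if amp == 0.0:
            continue
        for y in range(1 << n):
            sign = -1.0 if bin(x & y).count("1") % 2 else 1.0
            out[y] += sign * amp / norm
    return out

def deutsch_jozsa(f, n):
    """Decide whether f: {0..2^n - 1} -> {0, 1} is constant or balanced
    with a single (phase-)oracle application."""
    state = [0.0] * (1 << n)
    state[0] = 1.0                    # start in |0...0>
    state = hadamard_n(state, n)      # uniform superposition
    state = [(-1) ** f(x) * a for x, a in enumerate(state)]  # oracle
    state = hadamard_n(state, n)
    return "constant" if abs(state[0]) > 0.5 else "balanced"
```

For example, `deutsch_jozsa(lambda x: x & 1, 2)` returns `"balanced"`, while any constant `f` returns `"constant"`; a classical algorithm would need up to $2^{n-1}+1$ queries to be sure.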




algorithms

Modular reactive distillation emulation elements integrated with instrumentation, control, and simulation algorithms

A method for creating laboratory-scale reactive distillation apparatus from provided modular components is described. At least two types of modular distillation column stages are provided. A first type of modular stage comprises two physical interfaces for connection with a respective physical interface of another modular stage. A second type of modular stage comprises one such physical interface. At least one type of tray is provided for insertion into the first type of modular stage. A clamping arrangement is provided for joining two modular stages at their respective physical interfaces to form a joint. The invention provides that at least three modular stages can be joined. At least one sensor or sensor array can be inserted into each modular stage. At least one controllable element can be inserted into each modular stage. The invention provides for the study of traditional, advanced, and photochemical types of reactive distillation.




algorithms

When the chips are down, thank goodness for software engineers: AI algorithms 'outpace Moore's law'

ML eggheads, devs get more bang for their buck, say OpenAI duo

Machine-learning algorithms are improving in performance at a rate faster than that of the underlying computer chips, we're told.…




algorithms

30 Weird Chess Algorithms: Elo World

OK! I did manage to finish the video I described in the last few posts. It's this:


30 Weird Chess Algorithms: Elo World


I felt pretty down on this video as I was finishing it, I think mostly in the same way that one does about their dissertation, just because of the slog. I started it just thinking, I'll make a quick fun video about all those chess topics, but then once I had set out to fill in the entire tournament table, this sort of dictated the flow of the video even if I wanted to just get it over with. So it was way longer than I was planning, at 42 minutes, and my stress about this just led to more tedium as I would micro-optimize in editing to shorten it. RIP some mediocre jokes. But it turns out there are plenty of people on the internet who enjoy long-form nerdy content like this, and it was well-received, which is encouraging. (But now I am perplexed that it seems to be more popular than NaN Gates and Flip-FLOPS, which IMO is far more interesting/original. I guess the real lesson is just make what you feel like making, and post it!) The 50+ hours programming, drawing, recording and editing did have the desired effect of getting chess out of my system for now, at least.

Since last post I played Gato Roboto, which is a straightforward and easy but still very charming "Metroidvania." Now I'm working my way through Deus Ex: Mankind Divided, which (aside from the crashing) is a very solid sequel to Human Revolution. Although none of these games is likely to capture the magic of the original (one of my all-time faves), they do definitely have the property that you can play them in ways that the developer didn't explicitly set out for you, and as you know I get a big kick out of that.

Aside from the video games, I've picked back up a 10 year-old project that I never finished because it was a little bit outside my skillset. But having gotten significantly better at electronics and CNC, it is seeming pretty doable now. Stay tuned!




algorithms

Vsevolod Dyomkin: Dead-Tree Version of "Programming Algorithms"

I have finally obtained the first batch of the printed "Programming Algorithms" books and will shortly be sending them to the 13 people who asked for a hardcopy.

Here is a short video showing the book "in action":

If you also want to get a copy, here's how you do it:

  1. Send the money to my PayPal account: $30 if you want normal shipping or $35 if you want a tracking number. (The details on shipping are below).
  2. Shoot me an email to vseloved@gmail.com with your postal address.
  3. Once I see the donation, I'll go to the post office and send you the book.
  4. Optional step: if you want it to be signed, please indicate it in your letter.
Shipping details: As I said originally, the price of the dead-tree version will be $20+shipping. I'll ship via the Ukrainian national post. You can do the fee calculation online here (book weight is 0.58 kg, size is 23 x 17 x 2 cm): https://calc.ukrposhta.ua/international-calculator. Alas, the interface is only in Ukrainian. According to the examples I've tried, the cost will be approximately $10-15. To make it easier, I've just settled on $10 shipping without a tracking number or $15 if you want a tracking number, regardless of your country. I don't know how long it will take - probably depends on the location (I'll try to inquire when sending).

The book was already downloaded more than 1170 times (I'm not putting the exact number here as it's constantly growing little by little). I wish I knew how many people have actually read it in full or in part. I've also received some error corrections (special thanks goes to Serge Kruk), several small reviews and letters of encouragement. Those were very valuable and I hope to see more :)

Greetings from the far away city of Lima, Peru!
I loved this part: "Only losers don't comment their code, and comments will be used extensively"
Thank you so much for putting this comprehensive collection of highly important data structures, i'm already recommending this to two of my developers, which I hope i'll induce into my Lisp addiction.
--Flavio Egoavil

And here's another one:

Massively impressive book you've written! I've been a Lisp programmer for a long time and truly appreciate the work put in here. Making Lisp accessible for more people in relation to practical algorithms is very hard to do. But you truly made it. You'll definitely end up in the gallery of great and modern Lisp contributions like "Land of Lisp" and "Let Over Lambda". Totally agree with your path to focus on practical algorithmic thinking with Lisp and not messing it up with macros, oop and other advanced concepts.
--Lars Hård

Thanks guys, it's really appreciated!

If you feel the same or you've liked the book in some respect and have found it useful, please, continue to share news about it: that definitely helps attract more readers. And my main goal is to make it as widely read as possible...





algorithms

Hannah Fry to show strengths and weaknesses of algorithms

"Driverless cars, robot butlers and reusable rockets--if the big inventions of the past decade and the artificial intelligence developed to create them have taught us anything, it's that maths is undeniably cool. And if you’re still not convinced, chances are you’ve never had it explained to you via a live experiment with a pigeon before. Temporary pigeon handler and queen of making numbers fun is Dr Hannah Fry, the host of this year's annual Royal Institution Christmas Lectures." Learn more in "Christmas Lectures presenter Dr Hannah Fry on pigeons, AI and the awesome power of maths," by Rachael Pells, inews, December 23, 2019.




algorithms

Correction: Graph Algorithms for Condensing and Consolidating Gene Set Analysis Results. [Additions and Corrections]




algorithms

'Open Algorithms' Bill Would Jolt New York City Schools, Public Agencies

The proposed legislation would require the 1.1-million student district to publish the source code behind algorithms used to assign students to high schools, evaluate teachers, and more.




algorithms

Rage inside the machine : the prejudice of algorithms, and how to stop the internet making bigots of us all / Robert Elliott Smith.

Internet -- Social aspects.




algorithms

Sparse equisigned PCA: Algorithms and performance bounds in the noisy rank-1 setting

Arvind Prasadan, Raj Rao Nadakuditi, Debashis Paul.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 345--385.

Abstract:
Singular value decomposition (SVD) based principal component analysis (PCA) breaks down in the high-dimensional and limited sample size regime below a certain critical eigen-SNR that depends on the dimensionality of the system and the number of samples. Below this critical eigen-SNR, the estimates returned by the SVD are asymptotically uncorrelated with the latent principal components. We consider a setting where the left singular vector of the underlying rank one signal matrix is assumed to be sparse and the right singular vector is assumed to be equisigned, that is, having either only nonnegative or only nonpositive entries. We consider six different algorithms for estimating the sparse principal component based on different statistical criteria and prove that by exploiting sparsity, we recover consistent estimates in the low eigen-SNR regime where the SVD fails. Our analysis reveals conditions under which a coordinate selection scheme based on a sum-type decision statistic outperforms schemes that utilize the $\ell_{1}$ and $\ell_{2}$ norm-based statistics. We derive lower bounds on the size of detectable coordinates of the principal left singular vector and utilize these lower bounds to derive lower bounds on the worst-case risk. Finally, we verify our findings with numerical simulations and illustrate the performance with video data where the interest is in identifying objects.
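The intuition behind a sum-type selection statistic can be shown with a deliberately simplified sketch (ours, not the paper's estimators): when $X \approx \sigma uv^{T}$ with $v$ equisigned, say nonnegative, the $i$-th row sum is roughly $\sigma u_{i}\sum_{j}v_{j}$, so sign-varying noise tends to cancel in the sum while the signal accumulates, and thresholding absolute row sums recovers the support of the sparse left singular vector $u$.

```python
# Simplified sum-statistic support selection; all names and the toy
# data below are ours.

def select_support(X, tau):
    """Indices whose absolute row sum exceeds the threshold tau."""
    return [i for i, row in enumerate(X) if abs(sum(row)) > tau]

# Rank-1 signal: u is sparse (support {0, 1}), v = (1, 1, 1, 1) is
# equisigned; the "noise" alternates in sign and cancels in each row sum.
u = [3.0, 2.0, 0.0, 0.0, 0.0]
v = [1.0, 1.0, 1.0, 1.0]
noise = [[0.1 * (-1) ** (i + j) for j in range(4)] for i in range(5)]
X = [[u[i] * v[j] + noise[i][j] for j in range(4)] for i in range(5)]

support = select_support(X, tau=1.0)  # [0, 1]
```

An $\ell_{1}$-type statistic, by contrast, sums $|X_{ij}|$ and so accumulates noise magnitudes rather than cancelling them; the regimes in which one scheme beats the other are what the paper's analysis quantifies.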




algorithms

Path-Based Spectral Clustering: Guarantees, Robustness to Outliers, and Fast Algorithms

We consider the problem of clustering with the longest-leg path distance (LLPD) metric, which is informative for elongated and irregularly shaped clusters. We prove finite-sample guarantees on the performance of clustering with respect to this metric when random samples are drawn from multiple intrinsically low-dimensional clusters in high-dimensional space, in the presence of a large number of high-dimensional outliers. By combining these results with spectral clustering with respect to LLPD, we provide conditions under which the Laplacian eigengap statistic correctly determines the number of clusters for a large class of data sets, and prove guarantees on the labeling accuracy of the proposed algorithm. Our methods are quite general and provide performance guarantees for spectral clustering with any ultrametric. We also introduce an efficient, easy to implement approximation algorithm for the LLPD based on a multiscale analysis of adjacency graphs, which allows for the runtime of LLPD spectral clustering to be quasilinear in the number of data points.
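As a concrete, if naive, rendering of the metric (an exact brute-force computation, not the paper's quasilinear multiscale approximation): the LLPD between two points, i.e. the smallest achievable longest leg over all connecting paths, equals the largest edge on the path joining them in a minimum spanning tree, which the sketch below exploits directly.

```python
# Exact brute-force LLPD on a small point set; helper names are ours.
import itertools
import math

def llpd_matrix(points):
    """All-pairs longest-leg path distances, via the fact that the
    minimax path between two nodes runs along a minimum spanning tree."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    # Prim's algorithm on the complete distance graph.
    parent = {0: None}
    best = {i: (dist[0][i], 0) for i in range(1, n)}
    while best:
        j = min(best, key=lambda i: best[i][0])
        parent[j] = best.pop(j)[1]
        for i in best:
            if dist[j][i] < best[i][0]:
                best[i] = (dist[j][i], j)
    def path_to_root(v):
        path = []
        while v is not None:
            path.append(v)
            v = parent[v]
        return path
    # LLPD(a, b) = largest edge on the unique MST path between a and b.
    llpd = [[0.0] * n for _ in range(n)]
    for a, b in itertools.combinations(range(n), 2):
        pa, pb = path_to_root(a), path_to_root(b)
        common = set(pa) & set(pb)
        m = 0.0
        for path in (pa, pb):
            for u, v in zip(path, path[1:]):
                if u in common:
                    break
                m = max(m, dist[u][v])
        llpd[a][b] = llpd[b][a] = m
    return llpd
```

On six collinear points forming two elongated groups, (0,0), (1,0), (2,0) and (10,0), (11,0), (12,0), every within-group LLPD is 1 while every cross-group LLPD is 8, so the metric separates elongated clusters far more sharply than Euclidean distance does.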




algorithms

Convergences of Regularized Algorithms and Stochastic Gradient Methods with Random Projections

We study the least-squares regression problem over a Hilbert space, covering nonparametric regression over a reproducing kernel Hilbert space as a special case. We first investigate regularized algorithms adapted to a projection operator on a closed subspace of the Hilbert space. We prove convergence results with respect to variants of norms, under a capacity assumption on the hypothesis space and a regularity condition on the target function. As a result, we obtain optimal rates for regularized algorithms with randomized sketches, provided that the sketch dimension is proportional to the effective dimension up to a logarithmic factor. As a byproduct, we obtain similar results for Nyström regularized algorithms. Our results provide optimal, distribution-dependent rates that do not have any saturation effect for sketched/Nyström regularized algorithms, considering both the attainable and non-attainable cases, in the well-conditioned regimes. We then study stochastic gradient methods with projection over the subspace, allowing multi-pass over the data and minibatches, and we derive similar optimal statistical convergence results.
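A stripped-down example of the sketching idea (notation and code ours, using ordinary ridge regression over $\mathbb{R}^p$ rather than a general Hilbert space): restricting the regularized problem to the range of a random sketch matrix $S \in \mathbb{R}^{p\times m}$ reduces a $p$-dimensional solve to an $m$-dimensional one.

```python
# Toy sketched ridge regression; `sketched_ridge` and its signature are
# our own illustrative names, not the paper's notation.
import numpy as np

def sketched_ridge(A, b, lam, m, seed=0):
    """Solve min_w ||A S w - b||^2 + lam ||S w||^2 over the range of a
    random Gaussian sketch S (p x m), and return x_hat = S w."""
    p = A.shape[1]
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((p, m)) / np.sqrt(m)
    AS = A @ S
    # Normal equations of the restricted problem: m x m instead of p x p.
    G = AS.T @ AS + lam * (S.T @ S)
    w = np.linalg.solve(G, AS.T @ b)
    return S @ w
```

When $m = p$ the sketch is square and almost surely invertible, so the output coincides with the exact ridge solution $(A^{T}A + \lambda I)^{-1}A^{T}b$; the statistical content of the paper is that taking $m$ proportional to the effective dimension (up to a logarithmic factor) already attains optimal rates.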




algorithms

On the consistency of graph-based Bayesian semi-supervised learning and the scalability of sampling algorithms

This paper considers a Bayesian approach to graph-based semi-supervised learning. We show that if the graph parameters are suitably scaled, the graph-posteriors converge to a continuum limit as the size of the unlabeled data set grows. This consistency result has profound algorithmic implications: we prove that when consistency holds, carefully designed Markov chain Monte Carlo algorithms have a uniform spectral gap, independent of the number of unlabeled inputs. Numerical experiments illustrate and complement the theory.




algorithms

Scalable Approximate MCMC Algorithms for the Horseshoe Prior

The horseshoe prior is frequently employed in Bayesian analysis of high-dimensional models, and has been shown to achieve minimax optimal risk properties when the truth is sparse. While optimization-based algorithms for the extremely popular Lasso and elastic net procedures can scale to dimension in the hundreds of thousands, algorithms for the horseshoe that use Markov chain Monte Carlo (MCMC) for computation are limited to problems an order of magnitude smaller. This is due to high computational cost per step and growth of the variance of time-averaging estimators as a function of dimension. We propose two new MCMC algorithms for computation in these models that have significantly improved performance compared to existing alternatives. One of the algorithms also approximates an expensive matrix product to give orders of magnitude speedup in high-dimensional applications. We prove guarantees for the accuracy of the approximate algorithm, and show that gradually decreasing the approximation error as the chain extends results in an exact algorithm. The scalability of the algorithm is illustrated in simulations with problem size as large as $N=5,000$ observations and $p=50,000$ predictors, and an application to a genome-wide association study with $N=2,267$ and $p=98,385$. The empirical results also show that the new algorithm yields estimates with lower mean squared error, intervals with better coverage, and elucidates features of the posterior that were often missed by previous algorithms in high dimensions, including bimodality of posterior marginals indicating uncertainty about which covariates belong in the model.




algorithms

A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging. (arXiv:2004.12314v3 [cs.CV] UPDATED)

Segmentation of cardiac images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) widely used for visualizing diseased cardiac structures, is a crucial first step for clinical diagnosis and treatment. However, direct segmentation of LGE-MRIs is challenging due to their attenuated contrast. Since most clinical studies have relied on manual and labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the "2018 Left Atrium Segmentation Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms was performed using technical and biological metrics, together with subgroup and hyper-parameter analyses, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming the prior state of the art. In particular, our analysis demonstrated that double, sequentially used CNNs, in which a first CNN performs automatic region-of-interest localization and a second CNN performs refined regional segmentation, achieved far superior results than traditional methods and pipelines containing single CNNs. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for cardiac LGE-MRIs, and will serve as an important benchmark for evaluating and comparing future works in the field.




algorithms

Non-asymptotic Convergence Analysis of Two Time-scale (Natural) Actor-Critic Algorithms. (arXiv:2005.03557v1 [cs.LG])

As an important type of reinforcement learning algorithm, actor-critic (AC) and natural actor-critic (NAC) algorithms are often executed in two ways for finding optimal policies. In the first, nested-loop design, one update of the actor's policy is followed by an entire loop of the critic's updates of the value function, and the finite-sample analysis of such AC and NAC algorithms has recently been well established. The second, two time-scale design, in which the actor and critic update simultaneously but with different learning rates, has far fewer tuning parameters than the nested-loop design and is hence substantially easier to implement. Although two time-scale AC and NAC have been shown to converge in the literature, their finite-sample convergence rate has not been established. In this paper, we provide the first such non-asymptotic convergence rate for two time-scale AC and NAC under Markovian sampling and with the actor using general policy class approximation. We show that two time-scale AC requires an overall sample complexity of order $\mathcal{O}(\epsilon^{-2.5}\log^3(\epsilon^{-1}))$ to attain an $\epsilon$-accurate stationary point, and two time-scale NAC requires an overall sample complexity of order $\mathcal{O}(\epsilon^{-4}\log^2(\epsilon^{-1}))$ to attain an $\epsilon$-accurate globally optimal point. We develop novel techniques for bounding the bias error of the actor due to dynamically changing Markovian sampling and for analyzing the convergence rate of the linear critic with dynamically changing base functions and transition kernel.




algorithms

Fair Algorithms for Hierarchical Agglomerative Clustering. (arXiv:2005.03197v1 [cs.LG])

Hierarchical Agglomerative Clustering (HAC) algorithms are extensively utilized in modern data science and machine learning, and seek to partition the dataset into clusters while generating a hierarchical relationship between the data samples themselves. HAC algorithms are employed in a number of applications, such as biology, natural language processing, and recommender systems. Thus, it is imperative to ensure that these algorithms are fair: even if the dataset contains biases against certain protected groups, the cluster outputs generated should not be discriminatory against samples from any of these groups. However, recent work in clustering fairness has mostly focused on center-based clustering algorithms, such as k-median and k-means clustering. Therefore, in this paper, we propose fair algorithms for performing HAC that 1) enforce fairness constraints irrespective of the distance linkage criteria used, 2) generalize to any natural measure of clustering fairness for HAC, 3) work for multiple protected groups, and 4) have running times competitive with vanilla HAC. To the best of our knowledge, this is the first work that studies fairness for HAC algorithms. We also propose an algorithm with lower asymptotic time complexity than HAC that can rectify existing HAC outputs to make them fair. Moreover, we carry out extensive experiments on multiple real-world UCI datasets to demonstrate the working of our algorithms.




algorithms

QoS routing algorithms for wireless sensor networks

Venugopal, K. R., Dr., author
9789811527203 (electronic bk.)




algorithms

Sparse high-dimensional regression: Exact scalable algorithms and phase transitions

Dimitris Bertsimas, Bart Van Parys.

Source: The Annals of Statistics, Volume 48, Number 1, 300--323.

Abstract:
We present a novel binary convex reformulation of the sparse regression problem that constitutes a new duality perspective. We devise a new cutting plane method and provide evidence that it can solve to provable optimality the sparse regression problem for sample sizes $n$ and number of regressors $p$ in the 100,000s, that is, two orders of magnitude better than the current state of the art, in seconds. The ability to solve the problem for very high dimensions allows us to observe new phase transition phenomena. Contrary to traditional complexity theory which suggests that the difficulty of a problem increases with problem size, the sparse regression problem has the property that as the number of samples $n$ increases the problem becomes easier in that the solution recovers 100% of the true signal, and our approach solves the problem extremely fast (in fact faster than Lasso), while for small number of samples $n$, our approach takes a larger amount of time to solve the problem, but importantly the optimal solution provides a statistically more relevant regressor. We argue that our exact sparse regression approach presents a superior alternative over heuristic methods available at present.




algorithms

Model assisted variable clustering: Minimax-optimal recovery and algorithms

Florentina Bunea, Christophe Giraud, Xi Luo, Martin Royer, Nicolas Verzelen.

Source: The Annals of Statistics, Volume 48, Number 1, 111--137.

Abstract:
The problem of variable clustering is that of estimating groups of similar components of a $p$-dimensional vector $X=(X_{1},\ldots,X_{p})$ from $n$ independent copies of $X$. There exists a large number of algorithms that return data-dependent groups of variables, but their interpretation is limited to the algorithm that produced them. An alternative is model-based clustering, in which one begins by defining population level clusters relative to a model that embeds notions of similarity. Algorithms tailored to such models yield estimated clusters with a clear statistical interpretation. We take this view here and introduce the class of $G$-block covariance models as a background model for variable clustering. In such models, two variables in a cluster are deemed similar if they have similar associations with all other variables. This can arise, for instance, when groups of variables are noise corrupted versions of the same latent factor. We quantify the difficulty of clustering data generated from a $G$-block covariance model in terms of cluster proximity, measured with respect to two related, but different, cluster separation metrics. We derive minimax cluster separation thresholds, which are the metric values below which no algorithm can recover the model-defined clusters exactly, and show that they are different for the two metrics. We therefore develop two algorithms, COD and PECOK, tailored to $G$-block covariance models, and study their minimax-optimality with respect to each metric. Of independent interest is the fact that the analysis of the PECOK algorithm, which is based on a corrected convex relaxation of the popular $K$-means algorithm, provides the first statistical analysis of such algorithms for variable clustering. Additionally, we compare our methods with another popular clustering method, spectral clustering. Extensive simulation studies, as well as our data analyses, confirm the applicability of our approach.
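The similarity notion can be made concrete with a small sketch (a deliberate simplification of the covariance-difference idea, not the paper's COD or PECOK algorithms themselves): two variables are deemed close when their covariances with all other variables nearly agree, and under a well-separated $G$-block covariance model the clusters can be read off by thresholding that discrepancy.

```python
# Simplified covariance-difference clustering; names and data are ours.

def cod(cov, a, b):
    """Largest difference between the covariances of variables a and b
    with every other variable."""
    return max(abs(cov[a][k] - cov[b][k])
               for k in range(len(cov)) if k not in (a, b))

def cluster(cov, eps):
    """Greedy grouping: put j with i whenever cod(i, j) <= eps."""
    n, labels, next_label = len(cov), [-1] * len(cov), 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        for j in range(i + 1, n):
            if labels[j] == -1 and cod(cov, i, j) <= eps:
                labels[j] = next_label
        next_label += 1
    return labels

# A 2-block covariance: variables {0, 1} and {2, 3} form the clusters.
C = [[1.0, 0.8, 0.2, 0.2],
     [0.8, 1.0, 0.2, 0.2],
     [0.2, 0.2, 1.0, 0.8],
     [0.2, 0.2, 0.8, 1.0]]
```

Here `cluster(C, 0.1)` returns `[0, 0, 1, 1]`. The paper's algorithms instead work from estimated covariances with finite-sample guarantees, and PECOK replaces the greedy step with a corrected convex relaxation of $K$-means.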




algorithms

Ants Have Algorithms




algorithms

Healthcare Algorithms Are Biased, and the Results Can Be Deadly

Deep-learning algorithms suffer from a fundamental problem: They can adopt unwanted biases from the data on which they're trained. In healthcare, this can lead to bad diagnoses and care recommendations.




algorithms

Botan C++ Crypto Algorithms Library 2.11.0

Botan is a C++ library of cryptographic algorithms, including AES, DES, SHA-1, RSA, DSA, Diffie-Hellman, and many others. It also supports X.509 certificates and CRLs, and PKCS #10 certificate requests, and has a high level filter/pipe message processing system. The library is easily portable to most systems and compilers, and includes a substantial tutorial and API reference. This is the current stable release.




algorithms

Botan C++ Crypto Algorithms Library 2.12.0

Botan is a C++ library of cryptographic algorithms, including AES, DES, SHA-1, RSA, DSA, Diffie-Hellman, and many others. It also supports X.509 certificates and CRLs, and PKCS #10 certificate requests, and has a high level filter/pipe message processing system. The library is easily portable to most systems and compilers, and includes a substantial tutorial and API reference. This is the current stable release.




algorithms

Botan C++ Crypto Algorithms Library 2.12.1

Botan is a C++ library of cryptographic algorithms, including AES, DES, SHA-1, RSA, DSA, Diffie-Hellman, and many others. It also supports X.509 certificates and CRLs, and PKCS #10 certificate requests, and has a high level filter/pipe message processing system. The library is easily portable to most systems and compilers, and includes a substantial tutorial and API reference. This is the current stable release.




algorithms

Botan C++ Crypto Algorithms Library 2.13.0

Botan is a C++ library of cryptographic algorithms, including AES, DES, SHA-1, RSA, DSA, Diffie-Hellman, and many others. It also supports X.509 certificates and CRLs, and PKCS #10 certificate requests, and has a high level filter/pipe message processing system. The library is easily portable to most systems and compilers, and includes a substantial tutorial and API reference. This is the current stable release.




algorithms

Botan C++ Crypto Algorithms Library 2.14.0

Botan is a C++ library of cryptographic algorithms, including AES, DES, SHA-1, RSA, DSA, Diffie-Hellman, and many others. It also supports X.509 certificates and CRLs, and PKCS #10 certificate requests, and has a high level filter/pipe message processing system. The library is easily portable to most systems and compilers, and includes a substantial tutorial and API reference. This is the current stable release.




algorithms

OpenSSL signature_algorithms_cert Denial Of Service

Proof of concept denial of service exploit for the recent OpenSSL signature_algorithms_cert vulnerability.




algorithms

New Algorithms Aim To Stamp Out Abuse On Twitter




algorithms

Fast Algorithms for Conducting Large-Scale GWAS of Age-at-Onset Traits Using Cox Mixed-Effects Models [Statistical Genetics and Genomics]

Age-at-onset is one of the critical traits in cohort studies of age-related diseases. Large-scale genome-wide association studies (GWAS) of age-at-onset traits can provide more insights into genetic effects on disease progression and transitions between stages. Moreover, proportional hazards (or Cox) regression models can achieve higher statistical power in a cohort study than a case-control trait using logistic regression. Although mixed-effects models are widely used in GWAS to correct for sample dependence, application of Cox mixed-effects models (CMEMs) to large-scale GWAS is so far hindered by intractable computational cost. In this work, we propose COXMEG, an efficient R package for conducting GWAS of age-at-onset traits using CMEMs. COXMEG introduces fast estimation algorithms for general sparse relatedness matrices including, but not limited to, block-diagonal pedigree-based matrices. COXMEG also introduces a fast and powerful score test for dense relatedness matrices, accounting for both population stratification and family structure. In addition, COXMEG generalizes existing algorithms to support positive semidefinite relatedness matrices, which are common in twin and family studies. Our simulation studies suggest that COXMEG, depending on the structure of the relatedness matrix, is orders of magnitude computationally more efficient than coxme and coxph with frailty for GWAS. We found that using sparse approximation of relatedness matrices yielded highly comparable results in controlling false-positive rate and retaining statistical power for an ethnically homogeneous family-based sample. 
By applying COXMEG to a study of Alzheimer's disease (AD) using the National Institute on Aging Late-Onset Alzheimer's Disease Family Study sample comprising 3456 non-Hispanic whites and 287 African Americans, we identified the APOE ε4 variant with strong statistical power (P = 1e-101), far more significant than that reported in a previous study using a transformed variable and a marginal Cox model. Furthermore, we identified the novel SNP rs36051450 (P = 2e-9) near GRAMD1B, the minor allele of which significantly reduced the hazards of AD in both genders. These results demonstrate that COXMEG greatly facilitates the application of CMEMs in GWAS of age-at-onset traits.




algorithms

Structured assessment of frailty in multiple myeloma as a paradigm of individualized treatment algorithms in cancer patients at advanced age




algorithms

How well can algorithms recognize your masked face?

There's a scramble to adapt to a world where people routinely cover their faces.




algorithms

Trade secrets shouldn’t shield tech companies’ algorithms from oversight

Technology companies increasingly hide the world’s most powerful algorithms and business models behind the shield of trade secret protection. The legitimacy of these protections needs to be revisited when they obscure companies’ impact on the public interest or the rule of law. In 2016 and 2018, the United States and the European Union each adopted…

       





algorithms

Algorithms and sentencing: What does due process require?

There are significant potential benefits to using data-driven risk assessments in criminal sentencing. For example, risk assessments have rightly been endorsed as a mechanism to enable courts to reduce or waive prison sentences for offenders who are very unlikely to reoffend. Multiple states have recently enacted laws requiring the use of risk assessment instruments. And…

       




algorithms

Undulating Root Bench is designed with computer algorithms

Emerging out of a park in Seoul, South Korea, this dynamic piece of urban furniture offers a place to sit, walk and play.




algorithms

Algorithms and competition: Friends or foes?

This article by OECD's Antonio Capobianco and Pedro Gonzaga focuses on whether algorithms can make tacit collusion easier, both in oligopolistic markets and in markets which do not manifest the structural features that are usually associated with the risk of collusion. It was published in the August 2017 edition of the CPI Chronicle.




algorithms

Algorithms and collusion: Competition policy in the digital age

The combination of big data with technologically advanced tools is changing the competitive landscape in many markets and sectors. While this is producing benefits and efficiencies, it is also raising concerns of possible anti-competitive behaviour. This paper looks at whether algorithms can make tacit collusion easier and discusses some of the challenges they present for both competition law enforcement and market regulation.




algorithms

New York ICE officials accused of rigging algorithms to detain immigrants

The New York Civil Liberties Union has filed a lawsuit against ICE claiming New York officers rigged algorithms to always recommend people arrested for immigration violations be detained.




algorithms

Pervasive Systems, Algorithms and Networks [Electronic book] : 16th International Symposium, I-SPAN 2019, Naples, Italy, September 16-20, 2019, Proceedings / Christian Esposito, Jiman Hong, Kim-Kwang Raymond Choo, editors.

Cham : Springer, 2019.




algorithms

Rescheduling Under Disruptions in Manufacturing Systems: Models and Algorithms / by Dujuan Wang, Yunqiang Yin, Yaochu Jin

Online Resource




algorithms

The nonlinear workbook : chaos, fractals, cellular automata, genetic algorithms, gene expression programming, support vector machine, wavelets, hidden Markov models, fuzzy logic with C++, Java and symbolic C++ programs / Willi-Hans Steeb, University of Jo

Steeb, W.-H




algorithms

Algorithms for computational biology: third International Conference, AlCoB 2016, Trujillo, Spain, June 21-22, 2016, Proceedings / María Botón-Fernández, Carlos Martín-Vide, Sergio Santander-Jiménez, Miguel A. Vega-Rodríguez

Online Resource




algorithms

Algorithms for computational biology: 4th International Conference, AlCoB 2017, Aveiro, Portugal, June 5-6, 2017, Proceedings / Daniel Figueiredo, Carlos Martín-Vide, Diogo Pratas, Miguel A. Vega-Rodríguez (eds.)

Online Resource




algorithms

Algorithms for computational biology: 5th International Conference, AlCoB 2018, Hong Kong, China, June 25-26, 2018, Proceedings / Jesper Jansson, Carlos Martín-Vide, Miguel A. Vega-Rodríguez (eds.)

Online Resource




algorithms

Bioinformatics algorithms: design and implementation in Python / Miguel Rocha, Pedro G. Ferreira

Online Resource




algorithms

Algorithms for computational biology: 6th International Conference, AlCoB 2019, Berkeley, CA, USA, May 28-30, 2019, Proceedings / Ian Holmes, Carlos Martín-Vide, Miguel A. Vega-Rodríguez, editors

Online Resource