kernel

kernel-discuss Archives: This tutorial concludes the Basic Reporting section.





kernel

LXer: Upstream Linux 6.12 Makes It Easier To Build A Debug Kernel For Arch Linux

Published at LXer: The upstream Linux 6.11 kernel introduced the ability to easily produce a Pacman kernel package for Arch Linux with the new "make pacman-pkg" target. With Linux 6.12 new...



  • Syndicated Linux News

kernel

LXer: Early Linux 6.12 Kernel Benchmarks Showing Some Nice Gains On AMD Zen 5

Published at LXer: With the Linux 6.12 merge window wrapping up this weekend and the bulk of the new feature merges now in the tree, I've begun running some Linux 6.12 benchmarks. Here is an...



  • Syndicated Linux News

kernel

LXer: CachyOS ISO Release for September 2024 Brings Linux Kernel 6.11 and Optimizations

Published at LXer: The Arch Linux-based and KDE Plasma-focused CachyOS distribution has a new ISO release for September 2024 adding various performance improvements and optimizations across the...



  • Syndicated Linux News

kernel

LXer: Linus Torvalds Announces First Linux Kernel 6.12 Release Candidate

Published at LXer: Linus Torvalds announced today the general availability for public testing of the first Release Candidate (RC) development milestone of the upcoming Linux 6.12 kernel series. ...



  • Syndicated Linux News

kernel

LXer: Linux Kernel 6.12 RC1 Released: PREEMPT_RT Mainlined and Sched_ext Merged

Published at LXer: Linus Torvalds announced the release of Linux Kernel 6.12 RC1. Kernel 6.12 RC1 brings important new features like PREEMPT_RT and sched_ext.



  • Syndicated Linux News

kernel

LXer: Linux kernel 6.11 lands with vintage TV support

Published at LXer: Released remotely from Vienna, Linux kernel 6.11 is here, with improved monochrome TV support. Yes, in 2024. Emperor penguin Linus Torvalds was attending the Open Source Summit...



  • Syndicated Linux News

kernel

Android 15 QPR2 brings the newest Linux kernel to all Tensor-powered phones and tablets - Android Police

  1. Android 15 QPR2 brings the newest Linux kernel to all Tensor-powered phones and tablets  Android Police
  2. Here’s everything new in Android 15 QPR2 Beta 1 [Gallery]  9to5Google
  3. Your Google Pixel Phone's Newest Android 15 Beta Update Arrived  Droid Life
  4. Google is preparing to bring back a beloved customization feature from Android 11  Android Authority
  5. Android 15 QPR2 beta 1 release includes major upgrade for Tensor-powered Pixels  PhoneArena




kernel

Study of physico-mechanical properties of concretes based on palm kernel shells originating from the locality of Haut Nkam in Cameroon

This study is based on the use of palm kernel shells as aggregate in the manufacture of concrete. Substitution levels of 0, 25, 50, 75 and 100% of the aggregate volume fraction were used. In order to evaluate the effect of this substitution, the mechanical properties were determined at 7 and 28 days for compression and at 28 days for bending, together with the physical properties of fresh and har...




kernel

How a signed driver exposed users to kernel-level threats – Week in Security with Tony Anscombe

A purported ad blocker marketed as a security solution leverages a Microsoft-signed driver that inadvertently exposes victims to dangerous threats




kernel

Re: CVE-2024-36905: Linux kernel: Divide-by-zero on shutdown of TCP_SYN_RECV sockets

Posted by Solar Designer on Nov 12

NIST doesn't appear to provide their own CVSS vectors/scores lately.
However, they republish (with attribution) some third-party ones, this
time from CISA-ADP. The CISA-ADP CVSS vector for this vulnerability
specifies not only that it is network-reachable, but also that it has
High impact not just on Availability but also on Confidentiality and
Integrity. This results in a CVSSv3.1 score of 9.8. Even merely
correcting the vector not to...
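
For readers who want to check the arithmetic: below is a minimal reimplementation of the CVSSv3.1 base-score formula from the FIRST.org specification (scope-unchanged vectors only; the weights are the published constants). It reproduces the 9.8 above and shows that the same vector with no Confidentiality/Integrity impact, as fits a pure-DoS divide-by-zero, scores 7.5.

    # CVSS v3.1 base-metric weights (scope unchanged), per the FIRST.org spec.
    AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
    AC = {"L": 0.77, "H": 0.44}
    PR = {"N": 0.85, "L": 0.62, "H": 0.27}   # scope-unchanged values
    UI = {"N": 0.85, "R": 0.62}
    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

    def roundup(x: float) -> float:
        """Spec-defined round-up to one decimal place."""
        i = round(x * 100000)
        return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

    def base_score(vector: str) -> float:
        m = dict(p.split(":") for p in vector.split("/")[1:])  # skip "CVSS:3.1"
        assert m["S"] == "U", "this sketch handles scope-unchanged vectors only"
        iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
        impact = 6.42 * iss
        exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["PR"]] * UI[m["UI"]]
        return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

    print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
    print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"))  # 7.5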




kernel

Re: CVE-2024-36905: Linux kernel: Divide-by-zero on shutdown of TCP_SYN_RECV sockets

Posted by Clemens Lang on Nov 12

Hi,

I think the source for the CISA-ADP data is at [1]. For this specific CVE, the relevant file would be [2]. Their readme
has a section at the bottom, where they encourage feedback:

I’m aware of at least one prior case where a similar instance of (IMHO) overblown CVSS scores was discussed in an issue on
this particular GitHub project [3].

Somebody seems to already have opened a ticket for this CVE, too: [4]

[1]:...




kernel

RE: CVE-2024-36905: Linux kernel: Divide-by-zero on shutdown of TCP_SYN_RECV sockets

Posted by Joel GUITTET on Nov 12

Hello
First, thanks to Alexander for reposting, because I was not able to do so!
You're right, Clemens: I have myself asked the question on this GitHub project
(https://github.com/cisagov/vulnrichment/issues/130), but there is still no information for the moment.
Joel




kernel

SE-Radio Episode 271: Idit Levine on Unikernels

Jeff Meyerson talks to Idit Levine about unikernels and unik, a project for compiling unikernels. The Linux kernel contains features that may be unnecessary to many application developers--particularly if those developers are deploying to the cloud. Unikernels allow programmers to specify the minimum set of operating system features needed to deploy their applications. Topics include the Linux kernel, requirements for a cloud operating system, and how unikernels compare to Docker containers.





kernel

Cashew exporters concerned over surge in fraudulent imports of kernel

India produces 6-7 million tonnes of raw cashew per annum, and was until recently the leading supplier of kernels to global markets.




kernel

Photos: Cedar Rapids Kernels offer curbside ballpark food to fans

The team will be offering carry-out ballpark food to fans on Fridays with orders placed during business hours on Tuesdays and Wednesdays




kernel

No baseball right now, but Cedar Rapids Kernels offering a bit of the ballpark taste

CEDAR RAPIDS — You weren’t taken out to the ballgame or the crowd. You couldn’t get Cracker Jack, though you could get peanuts. Not to mention hot dogs and bacon cheeseburgers, a...



  • Minor League Sports

kernel

Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity. (arXiv:1706.02205v4 [math.NA] UPDATED)

Dense kernel matrices $\Theta \in \mathbb{R}^{N \times N}$ obtained from point evaluations of a covariance function $G$ at locations $\{ x_{i} \}_{1 \leq i \leq N} \subset \mathbb{R}^{d}$ arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and homogeneously-distributed sampling points, we show how to identify a subset $S \subset \{ 1, \dots, N \}^2$, with $\# S = O( N \log(N) \log^{d}( N /\epsilon ) )$, such that the zero fill-in incomplete Cholesky factorisation of the sparse matrix $\Theta_{ij} 1_{( i, j ) \in S}$ is an $\epsilon$-approximation of $\Theta$. This factorisation can provably be obtained in complexity $O( N \log( N ) \log^{d}( N /\epsilon) )$ in space and $O( N \log^{2}( N ) \log^{2d}( N /\epsilon) )$ in time, improving upon the state of the art for general elliptic operators; we further present numerical evidence that $d$ can be taken to be the intrinsic dimension of the data set rather than that of the ambient space. The algorithm only needs to know the spatial configuration of the $x_{i}$ and does not require an analytic representation of $G$. Furthermore, this factorisation straightforwardly provides an approximate sparse PCA with optimal rate of convergence in the operator norm. Hence, by using only subsampling and the incomplete Cholesky factorisation, we obtain, at nearly linear complexity, the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky factorisation we also obtain a solver for elliptic PDE with complexity $O( N \log^{d}( N /\epsilon) )$ in space and $O( N \log^{2d}( N /\epsilon) )$ in time, improving upon the state of the art for general elliptic operators.
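
The core primitive is easy to demonstrate. Below is a minimal sketch (not the paper's method: the sparsity set $S$ here is a naive distance cutoff rather than the paper's carefully constructed choice) of a zero fill-in incomplete Cholesky factorisation applied to a one-dimensional exponential covariance kernel:

    import numpy as np

    def ichol_zero_fill(A, pattern):
        """Zero fill-in incomplete Cholesky: L has nonzeros only inside `pattern`.
        Note: incomplete Cholesky can break down (negative pivot) for poorly
        chosen patterns; a small diagonal jitter usually helps."""
        n = A.shape[0]
        L = np.zeros_like(A)
        for j in range(n):
            L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
            for i in range(j + 1, n):
                if pattern[i, j]:
                    L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        return L

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(size=80))
    Theta = np.exp(-np.abs(x[:, None] - x[None, :]))    # exponential covariance
    Theta += 1e-10 * np.eye(x.size)                     # jitter for stability
    pattern = np.abs(x[:, None] - x[None, :]) < 0.2     # a naive sparsity set S
    L = ichol_zero_fill(Theta, pattern)
    err = np.linalg.norm(Theta - L @ L.T, 2) / np.linalg.norm(Theta, 2)
    print(f"relative operator-norm error: {err:.4f}")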




kernel

Dynamic rule management for kernel mode filter drivers

A method for providing rules for a plurality of processes from a user mode to a kernel mode of a computer is disclosed. The method includes providing to the kernel mode a policy for at least a first process of the plurality of processes, the policy indicating at least when and/or how notifications are to be provided from the kernel mode to the user mode upon detection in the kernel mode of launching of the first process. The method further includes selecting, from the rules stored in the user mode, rules related to the launching of the first process, in response to receiving from the kernel mode a first notification in accordance with the policy, and providing the selected rules related to the launching of the first process from the user mode to at least one of the one or more filter drivers in the kernel mode.
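
A toy model may make the claimed flow clearer. The sketch below (all names hypothetical; a plain-Python simulation, not a real filter-driver API) shows the lazy hand-off: the kernel side initially holds only the notification policy, and rules are pushed down only when a matching process actually launches.

    from dataclasses import dataclass, field

    @dataclass
    class KernelFilter:
        """Stands in for the kernel-mode filter driver."""
        policy: set = field(default_factory=set)    # names to notify on at launch
        rules: dict = field(default_factory=dict)   # rules pushed from user mode

        def on_process_launch(self, name, notify):
            if name in self.policy:
                notify(name)                        # first notification, per policy

    @dataclass
    class UserModeAgent:
        """Stands in for the user-mode service holding the full rule store."""
        all_rules: dict
        kernel: KernelFilter

        def on_notification(self, name):
            # Select only the rules relevant to the launched process and
            # provide them to the filter driver.
            self.kernel.rules[name] = self.all_rules.get(name, [])

    store = {"browser.exe": ["deny-write:HKLM"], "svc.exe": ["allow-all"]}
    kf = KernelFilter(policy={"browser.exe"})
    agent = UserModeAgent(store, kf)
    kf.on_process_launch("browser.exe", agent.on_notification)
    print(kf.rules)   # {'browser.exe': ['deny-write:HKLM']}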




kernel

SYSTEMS AND METHODS FOR UNIT TESTING OF FUNCTIONS ON REMOTE KERNELS

The disclosed computer-implemented method may include (1) providing a framework that includes (A) a user-space component that runs at a client site and (B) a kernel-space component that runs at a remote site, (2) identifying attributes of objects that reside at the remote site and whose addresses are unknown at the client site, (3) generating a script to test a function of a kernel running on the remote site based at least in part on the attributes, and (4) performing a remote unit testing of the function of the kernel by executing the script such that the user-space component (A) generates a message that identifies the attributes and (B) sends the message to the kernel-space component to facilitate (I) obtaining references to the objects by way of the attributes and (II) invoking the function by way of the references. Various other methods, systems, and computer-readable media are also disclosed.




kernel

Using N_Port ID Virtualization (NPIV) with kernel-based virtual machine (KVM) guests on IBM Power servers

This article provides the basic steps to use N_Port ID Virtualization (NPIV) technology in a kernel-based virtual machine (KVM) guest. It also explains the significance of NPIV, which allows multiple guests to use a single physical host bus adapter (HBA) to access multiple storage devices.
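
As a rough illustration of the host-side step, the sketch below uses the libvirt Python bindings to create an NPIV virtual HBA (vHBA) as a node device. The parent adapter name and the WWNN/WWPN values are placeholders you would replace with values from your own fabric, and the XML accepted can vary by libvirt version.

    import libvirt

    # Placeholder parent HBA and WWNs; replace with values valid on your host.
    VHBA_XML = """
    <device>
      <parent>scsi_host3</parent>
      <capability type='scsi_host'>
        <capability type='fc_host'>
          <wwnn>20000000c9848140</wwnn>
          <wwpn>10000000c9848140</wwpn>
        </capability>
      </capability>
    </device>
    """

    conn = libvirt.open("qemu:///system")
    vhba = conn.nodeDeviceCreateXML(VHBA_XML, 0)   # creates the NPIV vHBA
    print("created vHBA node device:", vhba.name())
    conn.close()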




kernel

Detecting Linux kernel process masquerading with command line forensics

Guest Post: Learn how to use the Linux command line to investigate suspicious processes trying to masquerade as kernel threads.
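
The core check is simple enough to sketch. Genuine kernel threads have an empty /proc/PID/cmdline and are parented by kthreadd (PID 2), so a user-space process that merely names itself like a kernel worker stands out. A minimal Python version of this /proc walk (prefix list abridged; a real investigation would also inspect /proc/PID/exe and maps):

    import os

    KTHREAD_PREFIXES = ("kworker", "ksoftirqd", "kswapd", "migration", "rcu_")

    def suspicious_kernel_threads():
        flagged = []
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/comm") as f:
                    comm = f.read().strip()
                with open(f"/proc/{pid}/stat") as f:
                    # Fields after the last ')': state, ppid, ...
                    ppid = int(f.read().rsplit(")", 1)[1].split()[1])
                with open(f"/proc/{pid}/cmdline", "rb") as f:
                    cmdline = f.read()
            except (FileNotFoundError, ProcessLookupError, PermissionError):
                continue  # the process exited mid-scan, or access was denied
            looks_kernel = comm.lstrip("[").startswith(KTHREAD_PREFIXES)
            # Real kernel threads: empty cmdline AND parent is kthreadd (PID 2).
            if looks_kernel and (cmdline or ppid != 2):
                flagged.append((int(pid), comm, ppid))
        return flagged

    for pid, comm, ppid in suspicious_kernel_threads():
        print(f"PID {pid} ({comm}): ppid={ppid} -- possible masquerade")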



  • Tech matters

kernel

Spectral analysis and representation of solutions of integro-differential equations with fractional exponential kernels

V. V. Vlasov and N. A. Rautian
Trans. Moscow Math. Soc. 80 (2020), 169-188.
Abstract, references and article information




kernel

Nonlinear n-term approximation of harmonic functions from shifts of the Newtonian kernel

Kamen G. Ivanov and Pencho Petrushev
Trans. Amer. Math. Soc. 373 (2020), 3117-3176.
Abstract, references and article information




kernel

On the predictive potential of kernel principal components

Ben Jones, Andreas Artemiou, Bing Li.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 1--23.

Abstract:
We give a probabilistic analysis of a phenomenon in statistics which, until recently, has not received a convincing explanation. This phenomenon is that the leading principal components tend to possess more predictive power for a response variable than lower-ranking ones despite the procedure being unsupervised. Our result, in its most general form, shows that the phenomenon goes far beyond the context of linear regression and classical principal components: if an arbitrary distribution for the predictor $X$ and an arbitrary conditional distribution for $Y \vert X$ are chosen, then any measurable function $g(Y)$, subject to a mild condition, tends to be more correlated with the higher-ranking kernel principal components than with the lower-ranking ones. The “arbitrariness” is formulated in terms of unitary invariance, and the tendency is then explicitly quantified by exploring how unitary invariance relates to the Cauchy distribution. The most general results, for technical reasons, are shown for the case where the kernel space is finite dimensional. The occurrence of this tendency in real-world databases is also investigated, showing that our results are consistent with observation.
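
The tendency is easy to observe empirically. A small sketch (synthetic data; scikit-learn's KernelPCA; the decay holds on average, and any single draw can deviate):

    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5))
    Y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

    Z = KernelPCA(n_components=10, kernel="rbf", gamma=0.2).fit_transform(X)
    g = np.tanh(Y)   # an arbitrary measurable function g(Y)

    for k in range(Z.shape[1]):
        c = abs(np.corrcoef(Z[:, k], g)[0, 1])
        print(f"kernel PC {k + 1:2d}: |corr with g(Y)| = {c:.3f}")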




kernel

On $\ell_p$-Support Vector Machines and Multidimensional Kernels

In this paper, we extend the methodology developed for Support Vector Machines (SVM) using the $\ell_2$-norm ($\ell_2$-SVM) to the more general case of $\ell_p$-norms with $p>1$ ($\ell_p$-SVM). We derive second-order cone formulations for the resulting dual and primal problems. The concept of kernel function, widely applied in $\ell_2$-SVM, is extended to the more general case of $\ell_p$-norms with $p>1$ by defining a new operator called a multidimensional kernel. This object gives rise to reformulations of dual problems, in a transformed space of the original data, where the dependence on the original data always appears through homogeneous polynomials. We adapt known solution algorithms to efficiently solve the resulting primal and dual problems, and computational experiments on real-world datasets show rather good accuracy for $\ell_p$-SVM with $p>1$.
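
For concreteness, the primal problem being generalized is the usual soft-margin SVM with the regularizer swapped for an $\ell_p$ norm (a sketch of the standard formulation; the paper's exact objective scaling may differ):

$$\min_{w,\,b,\,\xi}\;\; \tfrac{1}{p}\lVert w\rVert_{p}^{p} + C\sum_{i=1}^{n}\xi_{i} \quad\text{s.t.}\quad y_{i}\bigl(w^{\top}x_{i}+b\bigr)\ge 1-\xi_{i},\;\;\xi_{i}\ge 0,\;\; i=1,\dots,n.$$

The constraints stay linear; only the regularizer changes with $p$, which is what the paper reformulates via second-order cones.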




kernel

A Convex Parametrization of a New Class of Universal Kernel Functions

The accuracy and complexity of kernel learning algorithms are determined by the set of kernels over which they are able to optimize. An ideal set of kernels should: admit a linear parameterization (tractability); be dense in the set of all kernels (accuracy); and every member should be universal so that the hypothesis space is infinite-dimensional (scalability). Currently, there is no class of kernel that meets all three criteria - e.g. Gaussians are not tractable or accurate; polynomials are not scalable. We propose a new class that meets all three criteria - the Tessellated Kernel (TK) class. Specifically, the TK class: admits a linear parameterization using positive matrices; is dense in all kernels; and every element in the class is universal. This implies that the use of TK kernels for learning the kernel can obviate the need for selecting candidate kernels in algorithms such as SimpleMKL and parameters such as the bandwidth. Numerical testing on soft margin Support Vector Machine (SVM) problems shows that algorithms using TK kernels outperform other kernel learning algorithms and neural networks. Furthermore, our results show that when the ratio of the number of training data to features is high, the improvement of TK over MKL increases significantly.




kernel

GraKeL: A Graph Kernel Library in Python

The problem of accurately measuring the similarity between graphs is at the core of many applications in a variety of disciplines. Graph kernels have recently emerged as a promising approach to this problem. There are now many kernels, each focusing on different structural aspects of graphs. Here, we present GraKeL, a library that unifies several graph kernels into a common framework. The library is written in Python and adheres to the scikit-learn interface. It is simple to use and can be naturally combined with scikit-learn's modules to build a complete machine learning pipeline for tasks such as graph classification and clustering. The code is BSD licensed and is available at: https://github.com/ysig/GraKeL.
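
A minimal usage sketch (toy graphs; assumes a recent GraKeL release, where the Weisfeiler-Lehman kernel takes a base_graph_kernel argument):

    from grakel import Graph
    from grakel.kernels import WeisfeilerLehman, VertexHistogram
    from sklearn.svm import SVC

    # Two toy graphs: adjacency matrices plus discrete node labels.
    G1 = Graph([[0, 1, 1], [1, 0, 1], [1, 1, 0]],
               node_labels={0: "A", 1: "B", 2: "A"})
    G2 = Graph([[0, 1, 0], [1, 0, 1], [0, 1, 0]],
               node_labels={0: "A", 1: "A", 2: "B"})

    wl = WeisfeilerLehman(n_iter=3, base_graph_kernel=VertexHistogram,
                          normalize=True)
    K = wl.fit_transform([G1, G2])       # 2x2 kernel (similarity) matrix
    print(K)

    # The precomputed matrix plugs straight into scikit-learn:
    clf = SVC(kernel="precomputed").fit(K, [0, 1])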




kernel

Conjugate Gradients for Kernel Machines

Regularized least-squares (kernel-ridge / Gaussian process) regression is a fundamental algorithm of statistics and machine learning. Because generic algorithms for the exact solution have cubic complexity in the number of datapoints, large datasets require resorting to approximations. In this work, the computation of the least-squares prediction is itself treated as a probabilistic inference problem. We propose a structured Gaussian regression model on the kernel function that uses projections of the kernel matrix to obtain a low-rank approximation of the kernel and the matrix. A central result is an enhanced way to use the method of conjugate gradients for the specific setting of least-squares regression as encountered in machine learning.
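
The classical baseline the paper builds on is easy to sketch: solve the kernel ridge system $(K+\lambda I)\alpha = y$ with conjugate gradients, so each iteration costs one matrix-vector product instead of a cubic factorization (plain CG only; the paper's probabilistic treatment is not reproduced here):

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.uniform(-3, 3, size=(n, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * D2)                      # RBF kernel matrix
    lam = 1e-2

    A = LinearOperator((n, n), matvec=lambda v: K @ v + lam * v, dtype=np.float64)
    alpha, info = cg(A, y)                     # info == 0 means converged
    pred = K @ alpha
    print("converged:", info == 0,
          "| train RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))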




kernel

The weight function in the subtree kernel is decisive

Tree data are ubiquitous because they model a large variety of situations, e.g., the architecture of plants, the secondary structure of RNA, or the hierarchy of XML files. Nevertheless, the analysis of these non-Euclidean data is difficult per se. In this paper, we focus on the subtree kernel, a convolution kernel for tree data introduced by Vishwanathan and Smola in the early 2000s. More precisely, we investigate the influence of the weight function from a theoretical perspective and in real data applications. We establish, on a two-class stochastic model, that the performance of the subtree kernel is improved when the weight of leaves vanishes, which motivates the definition of a new weight function, learned from the data rather than fixed by the user as is usually done. To this end, we define a unified framework for computing the subtree kernel from ordered or unordered trees, which is particularly suitable for tuning parameters. We show through eight real data classification problems the great efficiency of our approach, in particular for small data sets, which also underscores the importance of the weight function. Finally, a visualization tool for the significant features is derived.




kernel

The theory and application of penalized methods or Reproducing Kernel Hilbert Spaces made easy

Nancy Heckman

Source: Statist. Surv., Volume 6, 113--141.

Abstract:
The popular cubic smoothing spline estimate of a regression function arises as the minimizer of the penalized sum of squares $\sum_{j}(Y_{j}-\mu(t_{j}))^{2}+\lambda \int_{a}^{b}[\mu''(t)]^{2}\,dt$, where the data are $t_{j},Y_{j}$, $j=1,\ldots,n$. The minimization is taken over an infinite-dimensional function space, the space of all functions with square integrable second derivatives. But the calculations can be carried out in a finite-dimensional space. The reduction from minimizing over an infinite-dimensional space to minimizing over a finite-dimensional space occurs for more general objective functions: the data may be related to the function $\mu$ in another way, the sum of squares may be replaced by a more suitable expression, or the penalty, $\int_{a}^{b}[\mu''(t)]^{2}\,dt$, might take a different form. This paper reviews the Reproducing Kernel Hilbert Space structure that provides a finite-dimensional solution for a general minimization problem. Particular attention is paid to the construction and study of the Reproducing Kernel Hilbert Space corresponding to a penalty based on a linear differential operator. In this case, one can often calculate the minimizer explicitly, using Green’s functions.
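
The shape of the finite-dimensional solution is worth stating. Under the conditions the paper reviews, the minimizer has the representer form (a sketch of the general statement, not the paper's notation):

$$\hat{\mu}(t) = \sum_{k=1}^{m} d_{k}\,\phi_{k}(t) + \sum_{j=1}^{n} c_{j}\,R(t, t_{j}),$$

where the $\phi_{k}$ span the null space of the penalty (for $\int_{a}^{b}[\mu''(t)]^{2}\,dt$, the linear functions) and $R$ is the reproducing kernel associated with the penalized subspace; the infinite-dimensional problem thus reduces to the $m+n$ coefficients $d$ and $c$.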




kernel

Primal and dual model representations in kernel-based learning

Johan A.K. Suykens, Carlos Alzate, Kristiaan Pelckmans

Source: Statist. Surv., Volume 4, 148--183.

Abstract:
This paper discusses the role of primal and (Lagrange) dual model representations in problems of supervised and unsupervised learning. The specification of the estimation problem is conceived at the primal level as a constrained optimization problem. The constraints relate to the model which is expressed in terms of the feature map. From the conditions for optimality one jointly finds the optimal model representation and the model estimate. At the dual level the model is expressed in terms of a positive definite kernel function, which is characteristic for a support vector machine methodology. It is discussed how least squares support vector machines play a central role as core models across problems of regression, classification, principal component analysis, spectral clustering, canonical correlation analysis, dimensionality reduction and data visualization.
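
As a concrete instance of the primal-dual pairing, the standard LS-SVM regression formulation (sketched from the usual textbook form) is

$$\min_{w,b,e}\; \tfrac{1}{2}w^{\top}w + \tfrac{\gamma}{2}\sum_{i=1}^{n} e_{i}^{2} \quad\text{s.t.}\quad y_{i} = w^{\top}\varphi(x_{i}) + b + e_{i},$$

whose dual, after eliminating $w$ and $e$ from the optimality conditions, is the linear system

$$\begin{bmatrix} 0 & \mathbf{1}^{\top} \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \qquad \Omega_{ij} = K(x_{i}, x_{j}) = \varphi(x_{i})^{\top}\varphi(x_{j}),$$

with the model evaluated as $\hat{f}(x) = \sum_{i}\alpha_{i} K(x, x_{i}) + b$: the primal lives in feature-map coordinates, the dual purely in kernel evaluations.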




kernel

Fast multivariate empirical cumulative distribution function with connection to kernel density estimation. (arXiv:2005.03246v1 [cs.DS])

This paper revisits the problem of computing empirical cumulative distribution functions (ECDF) efficiently on large, multivariate datasets. Computing an ECDF at one evaluation point requires $\mathcal{O}(N)$ operations on a dataset composed of $N$ data points. Therefore, a direct evaluation of ECDFs at $N$ evaluation points requires quadratic $\mathcal{O}(N^2)$ operations, which is prohibitive for large-scale problems. Two fast and exact methods are proposed and compared. The first one is based on fast summation in lexicographical order, with an $\mathcal{O}(N \log N)$ complexity, and requires the evaluation points to lie on a regular grid. The second one is based on the divide-and-conquer principle, with an $\mathcal{O}(N \log(N)^{(d-1)\vee 1})$ complexity, and requires the evaluation points to coincide with the input points. The two fast algorithms are described and detailed in the general $d$-dimensional case, and numerical experiments validate their speed and accuracy. Secondly, the paper establishes a direct connection between cumulative distribution functions and kernel density estimation (KDE) for a large class of kernels. This connection paves the way for fast exact algorithms for multivariate kernel density estimation and kernel regression. Numerical tests with the Laplacian kernel validate the speed and accuracy of the proposed algorithms. A broad range of large-scale multivariate density estimation, cumulative distribution estimation, survival function estimation and regression problems can benefit from the proposed numerical methods.
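
In one dimension the sorting idea is a few lines: sort once, then each ECDF value is a rank lookup, turning the quadratic pairwise count into $\mathcal{O}(N \log N)$ (a 1-D sketch only; the multivariate divide-and-conquer scheme is more involved):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100_000)

    # Fast: ecdf[i] = (# points <= x[i]) / N via one sort + binary searches.
    xs = np.sort(x)                                        # O(N log N)
    ecdf = np.searchsorted(xs, x, side="right") / x.size

    # Check against the naive quadratic definition on a small slice.
    small = x[:500]
    naive = (small[None, :] <= small[:, None]).mean(axis=1)
    fast = np.searchsorted(np.sort(small), small, side="right") / small.size
    assert np.allclose(naive, fast)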




kernel

Active Learning with Multiple Kernels. (arXiv:2005.03188v1 [cs.LG])

Online multiple kernel learning (OMKL) has provided attractive performance in nonlinear function learning tasks. Leveraging a random feature approximation, the major drawback of OMKL, known as the curse of dimensionality, has recently been alleviated. In this paper, we introduce a new research problem, termed (stream-based) active multiple kernel learning (AMKL), in which a learner is allowed to label selected data from an oracle according to a selection criterion. This is necessary in many real-world applications, as acquiring true labels is costly or time-consuming. We prove that AMKL achieves an optimal sublinear regret, implying that the proposed selection criterion indeed avoids useless label requests. Furthermore, we propose AMKL with an adaptive kernel selection (AMKL-AKS) in which irrelevant kernels can be excluded from a kernel dictionary 'on the fly'. This approach can improve the efficiency of active learning as well as the accuracy of the function approximation. Via numerical tests with various real datasets, it is demonstrated that AMKL-AKS yields performance similar to or better than the best-known OMKL, with a smaller number of labeled data.




kernel

Scalable high-resolution forecasting of sparse spatiotemporal events with kernel methods: A winning solution to the NIJ “Real-Time Crime Forecasting Challenge”

Seth Flaxman, Michael Chirico, Pau Pereira, Charles Loeffler.

Source: The Annals of Applied Statistics, Volume 13, Number 4, 2564--2585.

Abstract:
We propose a generic spatiotemporal event forecasting method which we developed for the National Institute of Justice’s (NIJ) Real-Time Crime Forecasting Challenge (National Institute of Justice (2017)). Our method is a spatiotemporal forecasting model combining scalable randomized Reproducing Kernel Hilbert Space (RKHS) methods for approximating Gaussian processes with autoregressive smoothing kernels in a regularized supervised learning framework. While the smoothing kernels capture the two main approaches in current use in the field of crime forecasting, kernel density estimation (KDE) and self-exciting point process (SEPP) models, the RKHS component of the model can be understood as an approximation to the popular log-Gaussian Cox Process model. For inference, we discretize the spatiotemporal point pattern and learn a log-intensity function using the Poisson likelihood and highly efficient gradient-based optimization methods. Model hyperparameters including quality of RKHS approximation, spatial and temporal kernel lengthscales, number of autoregressive lags and bandwidths for smoothing kernels as well as cell shape, size and rotation, were learned using cross validation. Resulting predictions significantly exceeded baseline KDE estimates and SEPP models for sparse events.
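
Of the two approaches mentioned, the KDE baseline is the simplest to sketch: fit a 2-D density to past event coordinates and flag the highest-intensity grid cells as the forecast hotspots (synthetic data; the winning model layers the RKHS/GP structure and temporal lags on top of this idea):

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    events = rng.normal(loc=[0.3, 0.7], scale=0.1, size=(400, 2))  # fake hotspot

    kde = gaussian_kde(events.T)                 # expects shape (dims, n_points)
    gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    intensity = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(50, 50)

    # "Forecast": the top 1% of cells by predicted intensity.
    hotspot = intensity >= np.quantile(intensity, 0.99)
    print("flagged cells:", int(hotspot.sum()), "of", hotspot.size)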




kernel

Kernel and wavelet density estimators on manifolds and more general metric spaces

Galatia Cleanthous, Athanasios G. Georgiadis, Gerard Kerkyacharian, Pencho Petrushev, Dominique Picard.

Source: Bernoulli, Volume 26, Number 3, 1832--1862.

Abstract:
We consider the problem of estimating the density of observations taking values in classical or nonclassical spaces such as manifolds and more general metric spaces. Our setting is quite general but also sufficiently rich in allowing the development of smooth functional calculus with well localized spectral kernels, Besov regularity spaces, and wavelet type systems. Kernel and both linear and nonlinear wavelet density estimators are introduced and studied. Convergence rates for these estimators are established and discussed.




kernel

Adaptive Bayesian Nonparametric Regression Using a Kernel Mixture of Polynomials with Application to Partial Linear Models

Fangzheng Xie, Yanxun Xu.

Source: Bayesian Analysis, Volume 15, Number 1, 159--186.

Abstract:
We propose a kernel mixture of polynomials prior for Bayesian nonparametric regression. The regression function is modeled by local averages of polynomials with kernel mixture weights. We obtain the minimax-optimal contraction rate of the full posterior distribution up to a logarithmic factor by estimating metric entropies of certain function classes. Under the assumption that the degree of the polynomials is larger than the unknown smoothness level of the true function, the posterior contraction behavior can adapt to this smoothness level provided an upper bound is known. We also provide a frequentist sieve maximum likelihood estimator with a near-optimal convergence rate. We further investigate the application of the kernel mixture of polynomials to partial linear models and obtain both the near-optimal rate of contraction for the nonparametric component and the Bernstein-von Mises limit (i.e., asymptotic normality) of the parametric component. The proposed method is illustrated with numerical examples and shows superior performance in terms of computational efficiency, accuracy, and uncertainty quantification compared to the local polynomial regression, DiceKriging, and the robust Gaussian stochastic process.




kernel

A Kernel Regression Procedure in the 3D Shape Space with an Application to Online Sales of Children’s Wear

Gregorio Quintana-Ortí, Amelia Simó.

Source: Statistical Science, Volume 34, Number 2, 236--252.

Abstract:
This paper is focused on kernel regression when the response variable is the shape of a 3D object represented by a configuration matrix of landmarks. Regression methods on this shape space are not trivial because this space has a complex finite-dimensional Riemannian manifold structure (non-Euclidean). Papers about it are scarce in the literature, the majority of them are restricted to the case of a single explanatory variable, and many of them are based on the approximated tangent space. In this paper, there are several methodological innovations. The first one is the adaptation of the general method for kernel regression analysis in manifold-valued data to the three-dimensional case of Kendall’s shape space. The second one is its generalization to the multivariate case and the addressing of the curse-of-dimensionality problem. Finally, we propose bootstrap confidence intervals for prediction. A simulation study is carried out to check the goodness of the procedure, and a comparison with a current approach is performed. Then, it is applied to a 3D database obtained from an anthropometric survey of the Spanish child population with a potential application to online sales of children’s wear.




kernel

Linux Kernel v2.4 Released




kernel

Linux Kernel 2.2/2.4 Local Root Ptrace Vulnerability





kernel

Security Flaws Force Linux Kernel Upgrade




kernel

Controlling The Kernel - It's All About DRM




kernel

Vista Kernel Fix Worse Than Useless







kernel

Ubuntu Issues Security Patch For Kernel Flaw




kernel

David Kernell Photo - Is Rep. Mike Kernell's Son the Sarah Palin 'Anonymous' Hacker?