
The Hardest Languages to Learn (for English and Non-English Speakers)

English is challenging to learn because of its complicated grammar, inconsistent sentence structure and colloquial idioms that have no counterparts in related languages. However, as a target language, English offers significantly more learning resources and opportunities for immersion than many other languages.





What Is a Hybrid Car? Learn How Hybrid Vehicles Work

How does a hybrid car improve your gas mileage? And more importantly, does it pollute less just because it gets better gas mileage? Learn how hybrids work, plus get tips on how to drive a hybrid car for maximum efficiency.





Let's Learn Mimetic Words in Korean!


Fall is the season of festivals and baseball here in Korea. What are some interesting festivals taking place around the country? What is the deal with the first pitch by celebrities at the baseball...






Automated selection of nanoparticle models for small-angle X-ray scattering data analysis using machine learning

Small-angle X-ray scattering (SAXS) is widely used to analyze the shape and size of nanoparticles in solution. A multitude of models, describing the SAXS intensity resulting from nanoparticles of various shapes, have been developed by the scientific community and are used for data analysis. Choosing the optimal model is a crucial step in data analysis, which can be difficult and time-consuming, especially for non-expert users. An algorithm is proposed, based on machine learning, representation learning and SAXS-specific preprocessing methods, which instantly selects the nanoparticle model best suited to describe SAXS data. The different algorithms compared are trained and evaluated on a simulated database. This database includes 75 000 scattering spectra from nine nanoparticle models, and realistically simulates two distinct device configurations. It will be made freely available to serve as a basis of comparison for future work. Deploying a universal solution for automatic nanoparticle model selection is a challenge made more difficult by the diversity of SAXS instruments and their flexible settings. The poor transferability of classification rules learned on one device configuration to another is highlighted. It is shown that training on several device configurations enables the algorithm to be generalized, without degrading performance compared with configuration-specific training. Finally, the classification algorithm is evaluated on a real data set obtained by performing SAXS experiments on nanoparticles for each of the instrumental configurations, which have been characterized by transmission electron microscopy. This data set, although very limited, allows estimation of the transferability of the classification rules learned on simulated data to real data.
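
As a rough illustration of this kind of pipeline, the sketch below trains a classifier to distinguish two analytical nanoparticle models from simulated, noisy scattering curves resampled from two different q-ranges onto a common grid. The form factors, noise model, q-grids and the choice of a random-forest classifier are illustrative assumptions, not the authors' implementation or database.

```python
# Minimal sketch (not the authors' code): classify simulated SAXS curves
# from two analytical nanoparticle models, mimicking two device q-ranges.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sphere(q, r):                       # sphere form factor
    x = q * r
    return (3 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

def gaussian_chain(q, rg):              # Debye function (polymer coil)
    x = (q * rg) ** 2
    return 2 * (np.exp(-x) + x - 1) / x**2

q_common = np.logspace(-2, 0, 200)      # common grid used for classification
q_devices = [np.logspace(-2, -0.2, 150), np.logspace(-1.5, 0, 180)]  # two "configurations"

X, y = [], []
for _ in range(2000):
    q = q_devices[rng.integers(2)]
    if rng.random() < 0.5:
        curve, label = sphere(q, rng.uniform(20, 80)), 0
    else:
        curve, label = gaussian_chain(q, rng.uniform(20, 80)), 1
    curve *= 1 + 0.05 * rng.standard_normal(q.size)       # crude multiplicative noise
    X.append(np.interp(q_common, q, np.log10(np.abs(curve) + 1e-12)))
    y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```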





Integrating machine learning interatomic potentials with hybrid reverse Monte Carlo structure refinements in RMCProfile

New software capabilities in RMCProfile allow researchers to study the structure of materials by combining machine learning interatomic potentials and reverse Monte Carlo.





Integrating machine learning interatomic potentials with hybrid reverse Monte Carlo structure refinements in RMCProfile

Structure refinement with reverse Monte Carlo (RMC) is a powerful tool for interpreting experimental diffraction data. To ensure that the under-constrained RMC algorithm yields reasonable results, the hybrid RMC approach applies interatomic potentials to obtain solutions that are both physically sensible and in agreement with experiment. To expand the range of materials that can be studied with hybrid RMC, we have implemented a new interatomic potential constraint in RMCProfile that grants flexibility to apply potentials supported by the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) molecular dynamics code. This includes machine learning interatomic potentials, which provide a pathway to applying hybrid RMC to materials without currently available interatomic potentials. To this end, we present a methodology to use RMC to train machine learning interatomic potentials for hybrid RMC applications.
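
The sketch below illustrates the hybrid RMC idea in schematic form: a Metropolis move is accepted or rejected based on a weighted sum of a data-misfit term and an interatomic potential energy. A toy Lennard-Jones pair sum stands in for the energy that RMCProfile would obtain from LAMMPS (including machine learning potentials), and the "experimental" target is invented; none of this reflects the actual RMCProfile implementation.

```python
# Schematic hybrid RMC Metropolis step (illustration only). A Lennard-Jones
# pair sum stands in for a LAMMPS-backed (e.g. machine learning) potential,
# and chi2() stands in for agreement with diffraction data.
import numpy as np

rng = np.random.default_rng(1)
L, N = 10.0, 64                            # box edge and atom count (toy values)
pos = rng.uniform(0, L, size=(N, 3))

def pair_distances(p):
    d = p[:, None, :] - p[None, :, :]
    d -= L * np.round(d / L)               # minimum-image convention
    return np.sqrt((d ** 2).sum(-1))

def potential_energy(p):                   # stand-in for a LAMMPS/ML potential call
    r = pair_distances(p)[np.triu_indices(N, k=1)]
    r = np.clip(r, 0.8, None)              # soften unphysical overlaps
    return float(np.sum(4 * (r ** -12 - r ** -6)))

def chi2(p):                               # stand-in for misfit to diffraction data
    r = pair_distances(p) + np.eye(N) * 1e9
    return float((r.min(axis=1).mean() - 1.5) ** 2)   # toy "experimental" spacing

def cost(p, w_pot=0.05):                   # weighted hybrid RMC cost
    return chi2(p) + w_pot * potential_energy(p)

c_old, T, accepted = cost(pos), 1.0, 0
for _ in range(2000):
    trial = pos.copy()
    i = rng.integers(N)
    trial[i] = (trial[i] + rng.normal(0, 0.2, 3)) % L  # move one atom
    c_new = cost(trial)
    if c_new <= c_old or rng.random() < np.exp((c_old - c_new) / T):
        pos, c_old, accepted = trial, c_new, accepted + 1
print(f"accepted {accepted} moves, final hybrid cost {c_old:.4f}")
```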





Influence of device configuration and noise on a machine learning predictor for the selection of nanoparticle small-angle X-ray scattering models

Small-angle X-ray scattering (SAXS) is a widely used method for nanoparticle characterization. A common approach to analysing nanoparticles in solution by SAXS involves fitting the curve using a parametric model that relates real-space parameters, such as nanoparticle size and electron density, to intensity values in reciprocal space. Selecting the optimal model is a crucial step in terms of analysis quality and can be time-consuming and complex. Several studies have proposed effective methods, based on machine learning, to automate the model selection step. Deploying these methods in software intended for both researchers and industry raises several issues. The diversity of SAXS instrumentation requires assessment of the robustness of these methods on data from various machine configurations, involving significant variations in the q-space ranges and highly variable signal-to-noise ratios (SNR) from one data set to another. In the case of laboratory instrumentation, data acquisition can be time-consuming and there is no universal criterion for defining an optimal acquisition time. This paper presents an approach that revisits the nanoparticle model selection method proposed by Monge et al. [Acta Cryst. (2024), A80, 202–212], evaluating and enhancing its robustness on data from device configurations not seen during training, by expanding the data set used for training. The influence of SNR on predictor robustness is then assessed, improved, and used to propose a stopping criterion for optimizing the trade-off between exposure time and data quality.
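
The sketch below is a toy version of the kind of stopping criterion discussed: frames are accumulated until a stand-in predictor (here, simple correlation against two template curves) gives a stable model choice, at which point acquisition could stop. The templates, noise level and stability window are invented assumptions, not the paper's predictor or criterion.

```python
# Toy exposure stopping criterion: keep adding noisy frames until the model
# choice of a stand-in predictor is stable for 5 consecutive additions.
import numpy as np

rng = np.random.default_rng(2)
q = np.linspace(0.01, 0.5, 200)
templates = {"sphere-like": np.exp(-(q * 30) ** 2 / 3),   # crude Guinier-like decay
             "rod-like":    1 / (1 + q * 30)}             # crude slow decay
truth = templates["sphere-like"]

def predict(curve):                        # stand-in for a trained classifier
    scores = {k: float(np.corrcoef(curve, t)[0, 1]) for k, t in templates.items()}
    return max(scores, key=scores.get)

accum, stable, choice = np.zeros_like(q), 0, None
for n_frames in range(1, 500):
    accum += truth + 0.8 * rng.standard_normal(q.size)    # one noisy frame
    mean_curve = accum / n_frames
    snr = float(np.abs(mean_curve).mean() / (0.8 / np.sqrt(n_frames)))
    new_choice = predict(mean_curve)
    stable = stable + 1 if new_choice == choice else 0
    choice = new_choice
    if stable >= 5:
        print(f"stop after {n_frames} frames (SNR≈{snr:.1f}, model: {choice})")
        break
```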





Deep learning to overcome Zernike phase-contrast nanoCT artifacts for automated micro-nano porosity segmentation in bone

Bone material contains a hierarchical network of micro- and nano-cavities and channels, known as the lacuna-canalicular network (LCN), that is thought to play an important role in mechanobiology and turnover. The LCN comprises micrometer-sized lacunae, voids that house osteocytes, and submicrometer-sized canaliculi that connect bone cells. Characterization of this network in three dimensions is crucial for many bone studies. To quantify X-ray Zernike phase-contrast nanotomography data, deep learning is used to isolate and assess porosity in artifact-laden tomographies of zebrafish bones. A technical solution is proposed to overcome the halo and shade-off domains in order to reliably obtain the distribution and morphology of the LCN in the tomographic data. Convolutional neural network (CNN) models are utilized with increasing numbers of images, repeatedly validated by `error loss' and `accuracy' metrics. U-Net and Sensor3D CNN models were trained on data obtained from two different synchrotron Zernike phase-contrast transmission X-ray microscopes, the ANATOMIX beamline at SOLEIL (Paris, France) and the P05 beamline at PETRA III (Hamburg, Germany). The Sensor3D CNN model with a smaller batch size of 32 and a training data size of 70 images showed the best performance (accuracy 0.983 and error loss 0.032). The analysis procedures, validated by comparison with human-identified ground-truth images, correctly identified the voids within the bone matrix. This proposed approach may have further application to classify structures in volumetric images that contain non-linear artifacts that degrade image quality and hinder feature identification.
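
For readers unfamiliar with the segmentation setup, the sketch below trains a tiny U-Net-style network in PyTorch on synthetic 2D images of bright "pores" and reports the loss and pixel-accuracy metrics mentioned above. The architecture depth, synthetic data and hyperparameters are placeholders, far smaller than the U-Net and Sensor3D models and the tomographic data used in the study.

```python
# Minimal U-Net-style binary segmentation sketch in PyTorch (illustration only).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(16, 8, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 1))          # output logits
    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.pool(e))
        return self.dec(torch.cat([self.up(m), e], dim=1))    # skip connection

def make_batch(n=8, size=64):             # synthetic "porosity": bright discs + noise
    yy, xx = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    imgs, masks = [], []
    for _ in range(n):
        cx, cy, r = torch.randint(10, size - 10, (3,))
        mask = ((xx - cx) ** 2 + (yy - cy) ** 2 < (r % 8 + 3) ** 2).float()
        imgs.append(mask + 0.5 * torch.randn(size, size))
        masks.append(mask)
    return torch.stack(imgs)[:, None], torch.stack(masks)[:, None]

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(20):
    x, y = make_batch()
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
with torch.no_grad():
    x, y = make_batch()
    acc = ((model(x) > 0) == y.bool()).float().mean()
print(f"error loss {loss.item():.3f}, pixel accuracy {acc.item():.3f}")
```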





Automated spectrometer alignment via machine learning

During beam time at a research facility, alignment and optimization of instrumentation, such as spectrometers, is a time-intensive task and often needs to be performed multiple times throughout the operation of an experiment. Despite the motorization of individual components, automated alignment solutions are not always available. In this study, a novel approach that combines optimizers with neural network surrogate models to significantly reduce the alignment overhead for a mobile soft X-ray spectrometer is proposed. Neural networks were trained exclusively using simulated ray-tracing data, and the disparity between experiment and simulation was obtained through parameter optimization. Real-time validation of this process was performed using experimental data collected at the beamline. The results demonstrate the ability to reduce alignment time from one hour to approximately five minutes. This method can also be generalized beyond spectrometers, for example, towards the alignment of optical elements at beamlines, making it applicable to a broad spectrum of research facilities.
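
A minimal sketch of surrogate-assisted alignment is shown below: a small neural network regressor is trained on simulated (motor settings → signal) pairs, and an optimizer then searches the surrogate for the settings that maximize the predicted signal. The toy Gaussian "simulator", the three parameters and the optimizer choice are assumptions standing in for the ray-tracing model and spectrometer motors described above.

```python
# Sketch of surrogate-assisted alignment (illustrative stand-ins throughout).
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def simulated_signal(params):              # stand-in for a ray-tracing simulation
    pitch, yaw, focus = params
    return np.exp(-((pitch - 0.3) ** 2 + (yaw + 0.1) ** 2 + (focus - 1.2) ** 2))

# Train the surrogate on simulated data only, as in the approach described above.
P = rng.uniform([-1, -1, 0], [1, 1, 2], size=(2000, 3))
S = np.array([simulated_signal(p) for p in P])
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(P, S)

# Alignment step: maximize the surrogate's predicted signal.
res = minimize(lambda p: -surrogate.predict(p.reshape(1, -1))[0],
               x0=np.zeros(3), bounds=[(-1, 1), (-1, 1), (0, 2)])
print("suggested motor settings:", np.round(res.x, 3))
print("true optimum of this toy objective: [0.3, -0.1, 1.2]")
```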





Revealing the structure of the active sites for the electrocatalytic CO2 reduction to CO over Co single atom catalysts using operando XANES and machine learning

Transition-metal nitrogen-doped carbons (TM-N-C) are emerging as a highly promising catalyst class for several important electrocatalytic processes, including the electrocatalytic CO2 reduction reaction (CO2RR). The unique local environment around the singly dispersed metal site in TM-N-C catalysts is likely to be responsible for their catalytic properties, which differ significantly from those of bulk or nanostructured catalysts. However, the identification of the actual working structure of the main active units in TM-N-C remains a challenging task due to the fluxional, dynamic nature of these catalysts and the scarcity of experimental techniques that could probe the structure of these materials under realistic working conditions. This issue is addressed in this work and the local atomistic and electronic structure of the metal site in a Co–N–C catalyst for CO2RR is investigated by employing time-resolved operando X-ray absorption spectroscopy (XAS) combined with advanced data analysis techniques. This multi-step approach, based on principal component analysis, spectral decomposition and supervised machine learning methods, allows the contributions of several co-existing species in the working Co–N–C catalysts to be decoupled, and their XAS spectra deciphered, paving the way for understanding the CO2RR mechanisms in the Co–N–C catalysts, and further optimization of this class of electrocatalytic systems.
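
The sketch below illustrates the first two ingredients of such an analysis on synthetic data: principal component analysis to gauge how many independent spectral components are present in a time series, followed by a non-negative linear-combination fit against reference spectra. The reference edges, mixing profile and noise level are invented; the supervised machine learning step used in the actual study is not reproduced here.

```python
# Sketch of PCA + linear-combination analysis for time-resolved XAS
# (toy reference spectra and mixing profiles are assumptions for illustration).
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
E = np.linspace(7700, 7800, 300)                       # energy grid (eV), Co K-edge region
ref_a = 1 / (1 + np.exp(-(E - 7720)))                  # "species A": bare edge step
ref_b = 1 / (1 + np.exp(-(E - 7725))) + 0.3 * np.exp(-((E - 7740) / 5) ** 2)  # "species B"

t = np.linspace(0, 1, 80)                              # reaction coordinate
weights = np.stack([1 - t, t], axis=1)                 # A converts into B over time
spectra = weights @ np.stack([ref_a, ref_b]) + 0.01 * rng.standard_normal((80, E.size))

# Step 1: PCA — one dominant variation on top of the mean implies two species here.
pca = PCA(n_components=5).fit(spectra)
print("leading explained-variance ratios:", np.round(pca.explained_variance_ratio_[:3], 4))

# Step 2: decompose each spectrum as a non-negative combination of the references.
refs = np.stack([ref_a, ref_b]).T                      # shape (n_energies, n_references)
fractions = np.array([nnls(refs, s)[0] for s in spectra])
print("fraction of species A at start / end:",
      fractions[0, 0].round(2), "/", fractions[-1, 0].round(2))
```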





X-ray lens figure errors retrieved by deep learning from several beam intensity images

The phase problem in the context of focusing synchrotron beams with X-ray lenses is addressed. The feasibility of retrieving the surface error of a lens system by using only the intensity of the propagated beam at several distances is demonstrated. A neural network, trained with a few thousand simulations using random errors, can accurately predict the lens error profile that accounts for all aberrations. This demonstrates the feasibility of routinely measuring the aberrations induced by an X-ray lens, or another optical system, using only a few intensity images.





Deep-learning map segmentation for protein X-ray crystallographic structure determination

When solving the structure of a protein from single-wavelength anomalous diffraction X-ray data, the initial phases obtained by phasing from an anomalously scattering substructure usually need to be improved by iterative electron-density modification. In this manuscript, the use of convolutional neural networks (CNNs) for segmentation of the initial experimental phasing electron-density maps is proposed. The results reported demonstrate that a CNN with U-net architecture, trained in a supervised manner on several thousand electron-density maps generated mainly from X-ray data in the Protein Data Bank, can improve current density-modification methods.





Robust and automatic beamstop shadow outlier rejection: combining crystallographic statistics with modern clustering under a semi-supervised learning strategy

During the automatic processing of crystallographic diffraction experiments, beamstop shadows are often unaccounted for or only partially masked. As a result of this, outlier reflection intensities are integrated, which is a known issue. Traditional statistical diagnostics have only limited effectiveness in identifying these outliers, here termed Not-Excluded-unMasked-Outliers (NEMOs). The diagnostic tool AUSPEX allows visual inspection of NEMOs, where they form a typical pattern: clusters at the low-resolution end of the AUSPEX plots of intensities or amplitudes versus resolution. To automate NEMO detection, a new algorithm was developed by combining data statistics with a density-based clustering method. This approach demonstrates a promising performance in detecting NEMOs in merged data sets without disrupting existing data-reduction pipelines. Re-refinement results indicate that excluding the identified NEMOs can effectively enhance the quality of subsequent structure-determination steps. This method offers a prospective automated means to assess the efficacy of a beamstop mask, as well as highlighting the potential of modern pattern-recognition techniques for automating outlier exclusion during data processing, facilitating future adaptation to evolving experimental strategies.
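
The sketch below mimics the two-step idea on synthetic data: a simple statistic (residual against the expected intensity falloff) nominates suspiciously weak low-resolution reflections, and a density-based clustering step (DBSCAN here, as a stand-in) checks whether they form a coherent clump rather than scattered noise. The data model, threshold and clustering parameters are illustrative assumptions, not the published algorithm or the AUSPEX statistics.

```python
# Sketch of NEMO-style outlier flagging on synthetic reflections.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
s = rng.uniform(0.05, 0.7, 5000)                        # 1/d for normal reflections
logI = 4 - 3 * s + 0.4 * rng.standard_normal(s.size)    # smooth falloff + scatter

# Inject a clump of shadowed low-resolution reflections with depressed intensity.
s_bad = rng.uniform(0.05, 0.09, 120)
logI_bad = rng.normal(-1.0, 0.2, s_bad.size)
s_all, logI_all = np.concatenate([s, s_bad]), np.concatenate([logI, logI_bad])

# Statistics step: residual against the expected intensity falloff.
resid = logI_all - np.polyval(np.polyfit(s_all, logI_all, 1), s_all)
candidates = np.where(resid < -2.0)[0]                  # suspiciously weak reflections

# Clustering step: do the candidates form a dense clump (NEMOs) or random noise?
X = StandardScaler().fit_transform(np.c_[s_all[candidates], resid[candidates]])
labels = DBSCAN(eps=0.4, min_samples=10).fit_predict(X)
n_flagged = int(np.sum(labels >= 0))
print(f"{candidates.size} weak reflections, {n_flagged} flagged as a NEMO cluster")
```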





CHiMP: deep-learning tools trained on protein crystallization micrographs to enable automation of experiments

A group of three deep-learning tools, referred to collectively as CHiMP (Crystal Hits in My Plate), was created for analysis of micrographs of protein crystallization experiments at the Diamond Light Source (DLS) synchrotron, UK. The first tool, a classification network, assigns images into categories relating to experimental outcomes. The other two tools are networks that perform both object detection and instance segmentation, resulting in masks of individual crystals in the first case and masks of crystallization droplets in addition to crystals in the second case, allowing the positions and sizes of these entities to be recorded. The creation of these tools used transfer learning, where weights from a pre-trained deep-learning network were used as a starting point and repurposed by further training on a relatively small set of data. Two of the tools are now integrated at the VMXi macromolecular crystallography beamline at DLS, where they have the potential to remove the need for any user input, both for monitoring crystallization experiments and for triggering in situ data collections. The third is being integrated into the XChem fragment-based drug-discovery screening platform, also at DLS, to allow the automatic targeting of acoustic compound dispensing into crystallization droplets.
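
The transfer-learning pattern described above can be sketched generically as follows: an ImageNet-pretrained backbone is frozen and only a new classification head is trained on a small labelled set. The ResNet-18 backbone, placeholder outcome categories and random stand-in batch below are assumptions for illustration; they are not the CHiMP networks or data.

```python
# Generic transfer-learning sketch (not the CHiMP code): reuse pretrained
# ResNet-18 weights and train only a new head for crystallization outcomes.
import torch
import torch.nn as nn
from torchvision import models

classes = ["clear", "precipitate", "crystal", "other"]    # placeholder categories
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                                # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, len(classes))   # new trainable head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: in practice this would come from labelled micrographs.
x = torch.randn(16, 3, 224, 224)
y = torch.randint(0, len(classes), (16,))
for step in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("toy training loss:", round(loss.item(), 3))
```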





Dynamic X-ray speckle-tracking imaging with high-accuracy phase retrieval based on deep learning

Speckle-tracking X-ray imaging is an attractive candidate for dynamic X-ray imaging owing to its flexible setup and simultaneous yields of phase, transmission and scattering images. However, traditional speckle-tracking imaging methods suffer from phase distortion at locations with abrupt changes in density, which is always the case for real samples, limiting the applications of the speckle-tracking X-ray imaging method. In this paper, we report a deep-learning based method which can achieve dynamic X-ray speckle-tracking imaging with high-accuracy phase retrieval. The calibration results of a phantom show that the profile of the retrieved phase is highly consistent with the theoretical one. Experiments of polyurethane foaming demonstrated that the proposed method revealed the evolution of the complicated microstructure of the bubbles accurately. The proposed method is a promising solution for dynamic X-ray imaging with high-accuracy phase retrieval, and has extensive applications in metrology and quantitative analysis of dynamics in material science, physics, chemistry and biomedicine.





The prediction of single-molecule magnet properties via deep learning

This paper uses deep learning to present a proof-of-concept for data-driven chemistry in single-molecule magnets (SMMs). Previous discussions within SMM research have proposed links between molecular structures (crystal structures) and single-molecule magnetic properties; however, these have only interpreted the results. Therefore, this study introduces a data-driven approach to predict the properties of SMM structures using deep learning. The deep-learning model learns the structural features of the SMM molecules by extracting the single-molecule magnetic properties from the 3D coordinates presented in this paper. The model determined whether a molecule was a single-molecule magnet with an accuracy of approximately 70%. The deep-learning model found SMMs among 20 000 metal complexes extracted from the Cambridge Structural Database. Using deep-learning models to predict SMM properties and guide the design of novel molecules is a promising direction.





Using deep-learning predictions reveals a large number of register errors in PDB depositions

The accuracy of the information in the Protein Data Bank (PDB) is of great importance for the myriad downstream applications that make use of protein structural information. Despite best efforts, the occasional introduction of errors is inevitable, especially where the experimental data are of limited resolution. A novel protein structure validation approach based on spotting inconsistencies between the residue contacts and distances observed in a structural model and those computationally predicted by methods such as AlphaFold2 has previously been established. It is particularly well suited to the detection of register errors. Importantly, this new approach is orthogonal to traditional methods based on stereochemistry or map–model agreement, and is resolution independent. Here, thousands of likely register errors are identified by scanning 3–5 Å resolution structures in the PDB. Unlike most methods, the application of this approach yields suggested corrections to the register of affected regions, which, as shown even by a limited implementation, lead to improved refinement statistics in the vast majority of cases. A few limitations and confounding factors such as fold-switching proteins are characterized, but this approach is expected to have broad application in spotting potential issues in current accessions and, through its implementation and distribution in CCP4, helping to ensure the accuracy of future depositions.
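
The core idea, comparing distances observed in a model with computationally predicted distances and scanning candidate register shifts, can be sketched as below. The toy chain, the deliberately mis-registered window and the plain mean-absolute-difference score are simplifications for illustration, not the published validation method.

```python
# Schematic register-shift scan: compare a model's Cα–Cα distances for a
# window against "predicted" distances under candidate sequence shifts.
import numpy as np

rng = np.random.default_rng(6)
N = 120
t = np.arange(N)
coords = np.c_[np.cos(t / 3), np.sin(t / 3), 0.5 * t]       # toy helix-like chain

def dist_matrix(xyz):
    d = xyz[:, None, :] - xyz[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

D_pred = dist_matrix(coords)                                # "predicted" distances
model_coords = coords.copy()
model_coords[40:60] = coords[43:63]                         # built-in +3 register error
D_model = dist_matrix(model_coords + 0.2 * rng.standard_normal(coords.shape))

others = np.r_[0:40, 60:N]                                  # residues outside the window
def shift_score(k):                                         # lower = better agreement
    rows = np.arange(40, 60) + k
    return float(np.mean(np.abs(D_model[40:60][:, others] - D_pred[rows][:, others])))

scores = {k: shift_score(k) for k in range(-5, 6)}
best = min(scores, key=scores.get)
print("best register shift for residues 40-60:", best, "(0 would mean no error)")
```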





POMFinder: identifying polyoxometallate cluster structures from pair distribution function data using explainable machine learning

Characterization of a material structure with pair distribution function (PDF) analysis typically involves refining a structure model against an experimental data set, but finding or constructing a suitable atomic model for PDF modelling can be an extremely labour-intensive task, requiring carefully browsing through large numbers of possible models. Presented here is POMFinder, a machine learning (ML) classifier that rapidly screens a database of structures, here polyoxometallate (POM) clusters, to identify candidate structures for PDF data modelling. The approach is shown to identify suitable POMs from experimental data, including in situ data collected with fast acquisition times. This automated approach has significant potential for identifying suitable models for structure refinement to extract quantitative structural parameters in materials chemistry research. POMFinder is open source and user friendly, making it accessible to those without prior ML knowledge. It is also demonstrated that POMFinder offers a promising modelling framework for combined modelling of multiple scattering techniques.





The Pixel Anomaly Detection Tool: a user-friendly GUI for classifying detector frames using machine-learning approaches

Data collection at X-ray free electron lasers has particular experimental challenges, such as continuous sample delivery or the use of novel ultrafast high-dynamic-range gain-switching X-ray detectors. This can result in a multitude of data artefacts, which can be detrimental to accurately determining structure-factor amplitudes for serial crystallography or single-particle imaging experiments. Here, a new data-classification tool is reported that offers a variety of machine-learning algorithms to sort data trained either on manual data sorting by the user or by profile fitting the intensity distribution on the detector based on the experiment. This is integrated into an easy-to-use graphical user interface, specifically designed to support the detectors, file formats and software available at most X-ray free electron laser facilities. The highly modular design makes the tool easily expandable to comply with other X-ray sources and detectors, and the supervised learning approach enables even the novice user to sort data containing unwanted artefacts or perform routine data-analysis tasks such as hit finding during an experiment, without needing to write code.





DLSIA: Deep Learning for Scientific Image Analysis

DLSIA (Deep Learning for Scientific Image Analysis) is a Python-based machine learning library that empowers scientists and researchers across diverse scientific domains with a range of customizable convolutional neural network (CNN) architectures for a wide variety of tasks in image analysis to be used in downstream data processing. DLSIA features easy-to-use architectures, such as autoencoders, tunable U-Nets and parameter-lean mixed-scale dense networks (MSDNets). Additionally, this article introduces sparse mixed-scale networks (SMSNets), generated using random graphs, sparse connections and dilated convolutions connecting different length scales. For verification, several DLSIA-instantiated networks and training scripts are employed in multiple applications, including inpainting for X-ray scattering data using U-Nets and MSDNets, segmenting 3D fibers in X-ray tomographic reconstructions of concrete using an ensemble of SMSNets, and leveraging autoencoder latent spaces for data compression and clustering. As experimental data continue to grow in scale and complexity, DLSIA provides accessible CNN construction and abstracts CNN complexities, allowing scientists to tailor their machine learning approaches, accelerate discoveries, foster interdisciplinary collaboration and advance research in scientific image analysis.





Robust image descriptor for machine learning based data reduction in serial crystallography

Serial crystallography experiments at synchrotron and X-ray free-electron laser (XFEL) sources are producing crystallographic data sets of ever-increasing volume. While these experiments have large data sets and high-frame-rate detectors (around 3520 frames per second), only a small percentage of the data are useful for downstream analysis. Thus, an efficient and real-time data classification pipeline is essential to differentiate reliably between useful and non-useful images, typically known as `hit' and `miss', respectively, and keep only hit images on disk for further analysis such as peak finding and indexing. While feature-point extraction is a key component of modern approaches to image classification, existing approaches require computationally expensive patch preprocessing to handle perspective distortion. This paper proposes a pipeline to categorize the data, consisting of a real-time feature extraction algorithm called modified and parallelized FAST (MP-FAST), an image descriptor and a machine learning classifier. For parallelizing the primary operations of the proposed pipeline, central processing units, graphics processing units and field-programmable gate arrays are implemented and their performances compared. Finally, MP-FAST-based image classification is evaluated using a multi-layer perceptron on various data sets, including both synthetic and experimental data. This approach demonstrates superior performance compared with other feature extractors and classifiers.
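
The sketch below shows the general shape of such a pipeline on synthetic frames: keypoints are extracted (plain OpenCV FAST standing in for the paper's MP-FAST), summarized into a simple descriptor, and passed to a multi-layer perceptron for hit/miss classification. The synthetic frames, the descriptor and the classifier settings are illustrative assumptions, not the published pipeline.

```python
# Toy hit/miss classification: OpenCV FAST keypoints -> descriptor -> MLP.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
fast = cv2.FastFeatureDetector_create(threshold=25)

def make_frame(hit):                            # synthetic detector frame
    img = rng.normal(20, 5, (128, 128))
    if hit:                                     # sprinkle bright Bragg-like spots
        for _ in range(rng.integers(10, 40)):
            y, x = rng.integers(2, 126, 2)
            img[y - 1:y + 2, x - 1:x + 2] += rng.uniform(80, 150)
    return np.clip(img, 0, 255).astype(np.uint8)

def descriptor(img):                            # simple global keypoint descriptor
    kps = fast.detect(img, None)
    responses = [kp.response for kp in kps] or [0.0]
    return [len(kps), float(np.mean(responses)), float(np.max(responses)),
            float(img.mean()), float(img.std())]

labels = rng.integers(0, 2, 400)
X = np.array([descriptor(make_frame(h)) for h in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0)).fit(X_tr, y_tr)
print("hit/miss accuracy on held-out frames:", clf.score(X_te, y_te))
```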





Bragg Spot Finder (BSF): a new machine-learning-aided approach to deal with spot finding for rapidly filtering diffraction pattern images

Macromolecular crystallography contributes significantly to understanding diseases and, more importantly, how to treat them by providing atomic resolution 3D structures of proteins. This is achieved by collecting X-ray diffraction images of protein crystals from important biological pathways. Spotfinders are used to detect the presence of crystals with usable data, and the spots from such crystals are the primary data used to solve the relevant structures. Having fast and accurate spot finding is essential, but recent advances in synchrotron beamlines used to generate X-ray diffraction images have brought us to the limits of what the best existing spotfinders can do. This bottleneck must be removed so spotfinder software can keep pace with the X-ray beamline hardware improvements and be able to see the weak or diffuse spots required to solve the most challenging problems encountered when working with diffraction images. In this paper, we first present Bragg Spot Detection (BSD), a large benchmark Bragg spot image dataset that contains 304 images with more than 66 000 spots. We then discuss the open source extensible U-Net-based spotfinder Bragg Spot Finder (BSF), with image pre-processing, a U-Net segmentation backbone, and post-processing that includes artifact removal and watershed segmentation. Finally, we perform experiments on the BSD benchmark and obtain results that are (in terms of accuracy) comparable to or better than those obtained with two popular spotfinder software packages (Dozor and DIALS), demonstrating that this is an appropriate framework to support future extensions and improvements.
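
To illustrate the post-processing stage mentioned above, the sketch below applies artifact removal and watershed splitting to a synthetic binary mask of two touching spots, the kind of output a segmentation backbone might produce. The mask geometry, minimum object size and peak-separation distance are assumed values; this is not the BSF code.

```python
# Post-processing sketch: artifact removal + watershed to split touching spots.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max
from skimage.morphology import remove_small_objects

# Synthetic "segmentation output": two overlapping discs plus a 1-pixel artifact.
yy, xx = np.mgrid[0:100, 0:100]
mask = (((yy - 45) ** 2 + (xx - 45) ** 2) < 15 ** 2) | \
       (((yy - 55) ** 2 + (xx - 60) ** 2) < 15 ** 2)
mask[5, 5] = True                                       # spurious single-pixel blob

mask = remove_small_objects(mask, min_size=10)          # artifact removal
distance = ndi.distance_transform_edt(mask)             # distance to background
coords = peak_local_max(distance, min_distance=10, labels=ndi.label(mask)[0])
seeds = np.zeros(distance.shape, dtype=bool)
seeds[tuple(coords.T)] = True
markers, _ = ndi.label(seeds)
labels = watershed(-distance, markers, mask=mask)       # split the touching spots
print("spots found after post-processing:", labels.max())   # expected: 2
```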





Patching-based deep-learning model for the inpainting of Bragg coherent diffraction patterns affected by detector gaps

A deep-learning algorithm is proposed for the inpainting of Bragg coherent diffraction imaging (BCDI) patterns affected by detector gaps. These regions of missing intensity can compromise the accuracy of reconstruction algorithms, inducing artefacts in the final result. It is thus desirable to restore the intensity in these regions in order to ensure more reliable reconstructions. The key aspect of the method lies in the choice of training the neural network with cropped sections of diffraction data and subsequently patching the predictions generated by the model along the gap, thus completing the full diffraction peak. This approach enables access to a greater amount of experimental data for training and offers the ability to average overlapping sections during patching. As a result, it produces robust and dependable predictions for experimental data arrays of any size. It is shown that the method is able to remove gap-induced artefacts on the reconstructed objects for both simulated and experimental data, which becomes essential in the case of high-resolution BCDI experiments.
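
The crop, predict, patch-back-and-average logic described above can be sketched as follows, with a trivial local-median fill standing in for the trained network. The frame, gap geometry, patch size and stride are assumed values for illustration only.

```python
# Sketch of the patching-and-averaging mechanics (stand-in "model" only).
import numpy as np

rng = np.random.default_rng(8)
frame = rng.poisson(5, (128, 128)).astype(float)
frame[:, 60:68] = np.nan                                # detector gap (missing stripe)

def model_predict(patch):                               # stand-in for the neural network
    filled = patch.copy()
    filled[np.isnan(filled)] = np.nanmedian(patch)
    return filled

patch, stride = 32, 16
pred_sum = np.zeros_like(frame)
weight = np.zeros_like(frame)
for i in range(0, frame.shape[0] - patch + 1, stride):
    for j in range(0, frame.shape[1] - patch + 1, stride):
        sl = (slice(i, i + patch), slice(j, j + patch))
        pred_sum[sl] += model_predict(frame[sl])        # per-patch prediction
        weight[sl] += 1.0
inpainted = pred_sum / weight                           # average overlapping patches
print("remaining NaNs after inpainting:", int(np.isnan(inpainted).sum()))
```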





Rapid detection of rare events from in situ X-ray diffraction data using machine learning

High-energy X-ray diffraction methods can non-destructively map the 3D microstructure and associated attributes of metallic polycrystalline engineering materials in their bulk form. These methods are often combined with external stimuli such as thermo-mechanical loading to take snapshots of the evolving microstructure and attributes over time. However, the extreme data volumes and the high costs of traditional data acquisition and reduction approaches pose a barrier to quickly extracting actionable insights and improving the temporal resolution of these snapshots. This article presents a fully automated technique capable of rapidly detecting the onset of plasticity in high-energy X-ray microscopy data. The technique is computationally faster by at least 50 times than the traditional approaches and works for data sets that are up to nine times sparser than a full data set. This new technique leverages self-supervised image representation learning and clustering to transform massive data sets into compact, semantic-rich representations of visually salient characteristics (e.g. peak shapes). These characteristics can rapidly indicate anomalous events, such as changes in diffraction peak shapes. It is anticipated that this technique will provide just-in-time actionable information to drive smarter experiments that effectively deploy multi-modal X-ray diffraction methods spanning many decades of length scales.
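
A much-simplified stand-in for the idea is sketched below: frames are embedded into a compact representation (PCA here, in place of self-supervised learning and clustering), and drift of the embedding away from a reference window flags the onset of a change such as peak broadening. The 1D "peaks", threshold and timing are invented for illustration.

```python
# Stand-in sketch of representation-based change detection on a toy series.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
x = np.linspace(-1, 1, 200)

def peak(width):                                     # 1D stand-in for a diffraction peak
    return np.exp(-(x / width) ** 2) + 0.01 * rng.standard_normal(x.size)

# Peaks broaden after "frame" 120 (onset of plasticity in this toy series).
frames = np.array([peak(0.1 if t < 120 else 0.1 + 0.003 * (t - 120)) for t in range(300)])

emb = PCA(n_components=3).fit_transform(frames)      # compact representation
ref = emb[:50].mean(axis=0)                          # reference centroid (early frames)
drift = np.linalg.norm(emb - ref, axis=1)
threshold = drift[:50].mean() + 5 * drift[:50].std()
onset = int(np.argmax(drift > threshold))
print("change detected at frame:", onset)            # expected shortly after frame 120
```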





Ptychographic phase retrieval via a deep-learning-assisted iterative algorithm

Ptychography is a powerful computational imaging technique with microscopic imaging capability and adaptability to various specimens. To obtain an imaging result, it requires a phase-retrieval algorithm whose performance directly determines the imaging quality. Recently, deep neural network (DNN)-based phase retrieval has been proposed to improve the imaging quality from the ordinary model-based iterative algorithms. However, the DNN-based methods have some limitations because of the sensitivity to changes in experimental conditions and the difficulty of collecting enough measured specimen images for training the DNN. To overcome these limitations, a ptychographic phase-retrieval algorithm that combines model-based and DNN-based approaches is proposed. This method exploits a DNN-based denoiser to assist an iterative algorithm like ePIE in finding better reconstruction images. This combination of DNN and iterative algorithms allows the measurement model to be explicitly incorporated into the DNN-based approach, improving its robustness to changes in experimental conditions. Furthermore, to circumvent the difficulty of collecting the training data, it is proposed that the DNN-based denoiser be trained without using actual measured specimen images but using a formula-driven supervised approach that systemically generates synthetic images. In experiments using simulation based on a hard X-ray ptychographic measurement system, the imaging capability of the proposed method was evaluated by comparing it with ePIE and rPIE. These results demonstrated that the proposed method was able to reconstruct higher-spatial-resolution images with half the number of iterations required by ePIE and rPIE, even for data with low illumination intensity. Also, the proposed method was shown to be robust to its hyperparameters. In addition, the proposed method was applied to ptychographic datasets of a Siemens star chart and ink toner particles measured at SPring-8 BL24XU, which confirmed that it can successfully reconstruct images from measurement scans with a lower overlap ratio of the illumination regions than is required by ePIE and rPIE.
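
The sketch below shows the general flavour of a denoiser-assisted iterative reconstruction, using a plain error-reduction loop with a support constraint rather than ptychographic ePIE/rPIE, and a Gaussian filter standing in for the trained DNN denoiser. The object, noise level and denoising schedule are toy assumptions.

```python
# Schematic denoiser-assisted iterative phase retrieval (error-reduction loop;
# a Gaussian filter stands in for the learned denoiser).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(10)
n = 64
yy, xx = np.mgrid[0:n, 0:n]
support = ((yy - n / 2) ** 2 + (xx - n / 2) ** 2) < (n / 4) ** 2
obj = support * (1 + 0.5 * np.sin(xx / 3.0))             # toy real-valued object
meas_amp = np.abs(np.fft.fft2(obj))                       # measured Fourier magnitude
meas_amp *= 1 + 0.01 * rng.standard_normal(meas_amp.shape)   # measurement noise

est = rng.random((n, n)) * support                        # random initial estimate
for it in range(200):
    F = np.fft.fft2(est)
    F = meas_amp * np.exp(1j * np.angle(F))               # impose measured magnitude
    est = np.real(np.fft.ifft2(F)) * support              # impose support constraint
    est = np.clip(est, 0, None)                           # non-negativity
    if it % 10 == 0:
        est = gaussian_filter(est, sigma=0.5)             # denoiser step (stand-in)

resid = np.linalg.norm(np.abs(np.fft.fft2(est)) - meas_amp) / np.linalg.norm(meas_amp)
print("Fourier-magnitude residual:", round(float(resid), 3))
```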





Co. Completes Earn-In to Form JV at Advanced Stage Uranium Project in Athabasca Basin

Source: Streetwise Reports 10/24/2024

Skyharbour Resources Ltd. (SYH:TSX.V; SYHBF:OTCQX; SC1P:FSE) has completed its earn-in requirements for a 51% interest at the Russell Lake Uranium Project in the central core of Canada's Eastern Athabasca Basin in Saskatchewan. This comes as the need for more net-zero power is sparking a rebirth of the nuclear industry.

Skyharbour Resources Ltd. (SYH:TSX.V; SYHBF:OTCQX; SC1P:FSE) announced that it has completed its earn-in requirements for a 51% interest at its co-flagship Russell Lake Uranium Project in the central core of Canada's Eastern Athabasca Basin in Saskatchewan.

The company and Rio Tinto have formed a joint venture (JV) to further explore the property, with Skyharbour holding 51% ownership interest and Rio Tinto holding 49%.

This summer, Skyharbour announced that the first phase of drilling had produced the best uranium mineralization intercept in the project's history: hole RSL24-02 at the recently identified Fork Target returned a 2.5-meter-wide intercept of 0.721% U3O8 at a relatively shallow depth of 338.1 meters, including 2.99% U3O8 over 0.5 meters at 339.6 meters.

The second phase of drilling included three holes totaling 1,649 meters. Drilling "at the MZE (M-Zone Extension) target, approximately 10 km northeast of the Fork target, identified prospective faulted graphitic gneiss accompanied by anomalous sandstone and basement geochemistry," Skyharbour said.

"The discovery of multi-percent, high-grade, sandstone-hosted uranium mineralization at a new target is a major breakthrough in the discovery process at Russell — something that hasn't been seen before at the project with the potential to quickly grow with more drilling," President and Chief Executive Officer Jordan Trimble said at the time.

ANT Survey, Upcoming Drilling Program

The company also announced on Thursday that it had completed an Ambient Noise Tomography (ANT) survey in preparation for further drilling at the Russell Lake Project, set to commence in the fall. The survey used Fleet Space Technologies' Exosphere technology to acquire 3D passive seismic velocity data over the highly prospective Grayling and Fork target areas, where previous drilling has intersected high-grade uranium mineralization.

"The ANT technology has been successfully employed in mapping significant sandstone and basement structures and associated alteration zones related to hydrothermal fluids pathways in the Athabasca Basin," the company said.

Results from the survey will be used to further refine drill targets for the upcoming drilling program. Skyharbour is fully funded and permitted for the follow-up fall drill campaign of approximately 7,000 meters at its main Russell and Moore projects, with 2,500 meters of drilling planned at Moore and 4,500 meters at Russell.

A Great Neighborhood

Russell Lake is a large, advanced-stage uranium exploration property totaling 73,294 hectares strategically located between Cameco's Key Lake and McArthur River projects and Denison's Wheeler River Project to the west, and Skyharbour's Moore project to the east.

"Skyharbour's acquisition of a majority interest in Russell Lake creates a large, nearly contiguous block of highly prospective uranium claims totaling 108,999 hectares between the Russell Lake and the Moore uranium projects," the company said.

Most of the historical exploration at Russell Lake was conducted before 2010, prior to the discovery of several major deposits in/around the Athabasca Basin, Skyharbour said.

Notable exploration targets on the property include the Grayling Zone, the M-Zone Extension target, the Little Man Lake target, the Christie Lake target, the Fox Lake Trail target and the newly identified Fork Zone target.

"More than 35 kilometers of largely untested prospective conductors in areas of low magnetic intensity also exist on the property," the company noted.

In an updated research note in July, Analyst Sid Rajeev of Fundamental Research Corp. wrote that Skyharbour "owns one of the largest portfolios among uranium juniors in the Athabasca Basin."

"Given the highly vulnerable uranium supply chain, we anticipate continued consolidation within the sector," wrote Rajeev, who rated the stock a Buy with a fair value estimate of CA$1.21 per share. "Additionally, the rapidly growing demand for energy from the AI (artificial intelligence) industry is likely to accelerate the adoption of nuclear power, which should, in turn, spotlight uranium juniors in the coming months."

The Catalyst: Uranium is 'BACK!'

The growth of AI, new data centers, electric vehicle (EV) adoption, and the need for more net-zero power means more nuclear energy and the uranium needed to fuel it.

Uranium prices are expected to move higher by the end of this quarter, when Trading Economics' global macro models and analyses forecast uranium to trade at US$84.15 per pound, Nuclear Newswire reported on Oct. 3. In another year, the site estimates that the metal will trade at US$91.80 per pound.

Just last month, Microsoft Corp. (MSFT:NASDAQ) announced a deal with Constellation Energy Group (CEG:NYSE) to restart and buy all of the power from one of the shut-down reactors at its infamous Three Mile Island plant in Pennsylvania. The Biden administration also announced a plan to restart the Palisades plant in Michigan.

Chris Temple, publisher of The National Investor, recently noted that with the Three Mile Island deal, "uranium/nuclear power is BACK!"

"I've watched as the news has continued to point to uranium being in the early innings of this new bull market," Temple wrote. "Yet the markets have been yawning . . . until now."

Ownership and Share Structure

Management, insiders, and close business associates own approximately 5% of Skyharbour.

According to Reuters, President and CEO Trimble owns 1.6%, and Director David Cates owns 0.70%.

Institutional, corporate, and strategic investors own approximately 55% of the company. Denison Mines owns 6.3%, Rio Tinto owns 2.0%, Extract Advisors LLC owns 9%, Alps Advisors Inc. owns 9.91%, Mirae Asset Global Investments (U.S.A) L.L.C. owns 6.29%, Sprott Asset Management L.P. owns 1.5%, and Incrementum AG owns 1.18%, Reuters reported.

There are 182.53 million shares outstanding, of which 178 million are free-float traded shares. The company has a market cap of CA$88.53 million and trades in a 52-week range of CA$0.31 to CA$0.64.


Important Disclosures:

  1. Skyharbour Resources Ltd. is a billboard sponsor of Streetwise Reports and pays SWR a monthly sponsorship fee between US$4,000 and US$5,000.
  2. Steve Sobek wrote this article for Streetwise Reports LLC and provides services to Streetwise Reports as an employee.
  3. This article does not constitute investment advice and is not a solicitation for any investment. Streetwise Reports does not render general or specific investment advice and the information on Streetwise Reports should not be considered a recommendation to buy or sell any security. Each reader is encouraged to consult with his or her personal financial adviser and perform their own comprehensive investment research. By opening this page, each reader accepts and agrees to Streetwise Reports' terms of use and full legal disclaimer. Streetwise Reports does not endorse or recommend the business, products, services or securities of any company.








Community leaders learn about new child safety initiatives.

Approximately 100 community leaders learned about two programs designed to protect area children at the Children's Advocacy and Protection Center's second annual Children's Breakfast.





Assistant County Manager Dewey Harris earns international Credentialed Manager distinction.

Catawba County Assistant County Manager Dewey Harris has earned the International City/County Management Association's (ICMA) Credentialed Manager designation. Established in 2002, the ICMA Credentialed Manager program recognizes professional government managers whom the ICMA certifies as having a "commitment to continuous learning and professional development".





Public Health earns reaccreditation from North Carolina Local Health Department Accreditation Board.

Catawba County Public Health has earned reaccreditation from the North Carolina Local Health Department Accreditation Board.





Learning About Evolution Critical for Understanding Science

Many public school students receive little or no exposure to the theory of evolution, the most important concept in understanding biology, says a new guidebook from the National Academy of Sciences (NAS).





Adding It Up - Helping Children Learn Mathematics

American students' progress toward proficiency in mathematics requires major changes in instruction, curricula, and assessment in the nation's schools, says a new report from the National Research Council of the National Academies.





New Report on Science Learning at Museums, Zoos, Other Informal Settings

Each year, tens of millions of Americans, young and old, choose to learn about science in informal ways -- by visiting museums and aquariums, attending after-school programs, pursuing personal hobbies, and watching TV documentaries, for example.





Transferable Knowledge and Skills Key to Success in Education and Work - Report Calls for Efforts to Incorporate Deeper Learning Into Curriculum

Educational and business leaders want today's students both to master school subjects and to excel in areas such as problem solving, critical thinking, and communication.





K-12 Science Teachers Need Sustained Professional Learning Opportunities to Teach New Science Standards, Report Says

As researchers’ and teachers’ understanding of how best to learn and teach science evolves and curricula are redesigned, many teachers are left without the experience needed to enhance the science and engineering courses they teach, says a new report from the National Academies of Sciences, Engineering, and Medicine.





Promoting the Educational Success of Children and Youth Learning English - New Report

Despite their potential, many English learners (ELs) -- who account for more than 9 percent of K-12 enrollment in the U.S. -- lag behind their English-speaking monolingual peers in educational achievement, in part because schools do not provide adequate instruction and social-emotional support to acquire English proficiency or access to academic subjects at the appropriate grade level, says a new report from the National Academies of Sciences, Engineering, and Medicine.





United States Skilled Technical Workforce Is Inadequate to Compete in Coming Decades - Actions Needed to Improve Education, Training, and Lifelong Learning of Workers

Policymakers, employers, and educational institutions should take steps to strengthen the nation’s skilled technical workforce, says a new report from the National Academies of Sciences, Engineering, and Medicine.





Learning Is a Complex and Active Process That Occurs Throughout the Life Span, New Report Says

A new report from the National Academies of Sciences, Engineering, and Medicine highlights the dynamic process of learning throughout the life span and identifies frontiers in which more research is needed to pursue an even deeper understanding of human learning.





New Report Provides Guidance on How to Improve Learning Outcomes in STEM for English Learners

A shift is needed in how science, technology, engineering, and mathematics (STEM) subjects are taught to students in grades K-12 who are learning English, says a new report from the National Academies of Sciences, Engineering, and Medicine.





New Report Says ‘Citizen Science’ Can Support Both Science Learning and Research Goals

Scientific research that involves nonscientists contributing to research processes – also known as ‘citizen science’ – supports participants’ learning, engages the public in science, contributes to community scientific literacy, and can serve as a valuable tool to facilitate larger scale research, says a new report from the National Academies of Sciences, Engineering, and Medicine.





Investigation and Design Can Improve Student Learning in Science and Engineering - Changes to Instructional Approaches Will Require Significant Effort

Centering science instruction around investigation and design can improve learning in middle and high schools and help students make sense of phenomena in the world around them.





To Ensure High-Quality Patient Care, the Health Care System Must Address Clinician Burnout Tied to Work and Learning Environments, Administrative Requirements

Between one-third and one-half of U.S. clinicians experience burnout, and addressing the epidemic requires systemic changes by health care organizations, educational institutions, and all levels of government, says a new report from the National Academy of Medicine.





New Report Recommends Ways to Strengthen the Resilience of Supply Chains After Hurricanes, Based on Lessons Learned From Hurricanes Harvey, Irma, Maria

A new report from the National Academies of Sciences, Engineering, and Medicine recommends ways to make supply chains -- the systems that provide populations with critical goods and services, such as food and water, gasoline, and pharmaceuticals and medical supplies – more resilient in the face of hurricanes and other disasters, drawing upon lessons learned from the 2017 hurricanes Harvey, Irma, and Maria.





Colleges and Universities Should Strengthen Sustainability Education Programs by Increasing Interdisciplinarity, Fostering Experiential Learning, and Incorporating Diversity, Equity, and Inclusion

Colleges and universities should embrace sustainability education as a vital field that requires tailored educational experiences delivered through courses, majors, minors, and research and graduate degrees, says a new report from the National Academies of Sciences, Engineering, and Medicine.





Designing Learning Experiences with Attention to Students’ Backgrounds Can Attract Underrepresented Groups to Computing

Learning experiences in computing that are designed with attention to K-12 students’ interests, identities, and backgrounds may attract underrepresented groups to computing better than learning experiences that mimic current professional computing practices and culture do, says a new report from the National Academies of Sciences, Engineering, and Medicine.





Fighting Vaccine Hesitancy - What Can We Learn From Social Science

As COVID-19 vaccination programs across the country transition from meeting urgent demand to reaching people who are less eager to get the shot, leaders are looking for new vaccine communications strategies.





Building cyber-resilience: Lessons learned from the CrowdStrike incident

Organizations, including those that weren't struck by the CrowdStrike incident, should resist the temptation to attribute the IT meltdown to exceptional circumstances.





Learning programming through game building

Jiro's Pick this week is AstroVolley Courseware by Paul Huxel. Back in my undergraduate studies (many, many years ago), I took a Pascal programming course, and it was the first official programming...





Is 'learn to code' just empty advice now that AI does the heavy lifting? Here’s Google’s take

Google's head of research, Yossi Matias, emphasizes the enduring importance of coding skills in an AI-driven world. While acknowledging AI's growing role in software development, Matias argues that basic coding knowledge is crucial for understanding and leveraging AI's potential. He compares coding to math, suggesting that both are fundamental for navigating an increasingly tech-reliant society.





Stanford scientists combine satellite data and machine learning to map poverty

One of the biggest challenges in providing relief to people living in poverty is locating them. The availability of accurate and reliable information on the location of impoverished zones is surprisingly lacking for much of the world, particularly on the African continent. Aid groups and other international organizations often fill in the gaps with door-to-door surveys, but these can be expensive and time-consuming to conduct.



