experiment

Experiments illuminate key component of plants' immune systems




experiment

Perplexity AI Begins Experimenting With Ads on Its Search Platform

Perplexity AI, the company behind the eponymous artificial intelligence (AI) search platform, announced on Tuesday that it will begin experimenting with ads this week. An earlier report in August highlighted the company's intention to incorporate ads by the end of the year to boost its revenue. The AI firm said the ads will be shown in a format that does not clutter or ruin the user experience of the platform.




experiment

GREG GUTFELD: We've got one shot to save the greatest experiment in government of all time

'Gutfeld!' panelists react to the latest round of Cabinet picks in President-elect Trump's upcoming administration.




experiment

Bandai Namco Filmworks Releases 2nd Aura Battler Dunbine 'Experimental Video' With New Theme Song by MindaRyn

MindaRyn performs the theme song "Remaining Story," written specifically for the video




experiment

Data-driven discovery of carbonyl organic electrode molecules: machine learning and experiment

J. Mater. Chem. A, 2024, Advance Article
DOI: 10.1039/D4TA00136B, Paper
Jiayi Du, Jun Guo, Qiqi Sun, Wei Liu, Tong Liu, Gang Huang, Xinbo Zhang
In this work, a universal strategy for the identification of high-performance OEMs for LIBs has been illustrated. The predicted molecule, naphthalene-1,4,5,8-tetraone, exhibits excellent electrochemical performance in terms of capacity and lifetime.
To cite this article before page numbers are assigned, use the DOI form of citation above.




experiment

Highly Boosted Energy Storage Performance of Few-Layered MoS2 Utilized by Improved Electrode Fabrication: Experimental and Theoretical Studies

J. Mater. Chem. A, 2024, Accepted Manuscript
DOI: 10.1039/D4TA01713G, Paper
Zeyad M. Abdulhamid, Abhishek C. Lokhande, Adewale Hammed Pasanaje, Daniel S Choi, Nirpendra Singh, Kyriaki Polychronopoulou, Dalaver H. Anjum
Few-layer two-dimensional (2D) molybdenum disulfide (MoS2) has great potential in designing high-performance supercapacitors due to its high theoretical specific capacity, tunable bandgap, and fascinating 2D layered structure. However, the lack...




experiment

Experimental Fortification Lab Centre opened at KLE




experiment

Clumped isotope temperature calibration for calcite: bridging theory and experimentation

Jautzy, J J; Savard, M M; Dhillon, R S; Bernasconi, S M; Smirnoff, A. Geochemical Perspectives Letters vol. 14, 2020 p. 36-41, https://doi.org/10.7185/geochemlet.2021




experiment

Games, experiments and Bible stories

OM uses an educational programme called KidsGames to share Bible stories in a public school.




experiment

May 26 2009 Long Lost Radio History Image - Radio Rhema Experiments

Over the Christmas - New Year period of 1976-77, Radio Rhema broadcast an experimental 10-day short-term AM program from Ferrymead Historic Park, Christchurch, New Zealand...




experiment

another perlin noise experiment

I decided to work on a javascript canvas experiment where I can drag the mouse and scroll within a map. I got a simple "map" drawing colored squares that can be dragged around. I found a perlin noise library, and made the map look like wavy colors. It was pretty slow at that point, so I rewrote the drawing code so it writes directly into the image data. Now I have a smoothly scrolling wavy thing, but I'm not sure what I'm going to use it for. sproutworks.com/demos/perlin
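The post doesn't include the code, but the trick it describes, writing noise values straight into the ImageData buffer instead of drawing each square separately, might look roughly like the sketch below. The noise2 function here is only a cheap stand-in for whatever Perlin library the post used, and the offsets stand in for the current drag/scroll position; treat it as an illustration of the idea, not the post's actual code.

  // Stand-in for the unnamed Perlin library: cheap hash-based value noise in [-1, 1].
  function noise2(x: number, y: number): number {
    const xi = Math.floor(x), yi = Math.floor(y);
    const fx = x - xi, fy = y - yi;
    const hash = (i: number, j: number) => {
      const s = Math.sin(i * 127.1 + j * 311.7) * 43758.5453;
      return s - Math.floor(s); // pseudo-random value in [0, 1)
    };
    // Smoothstep-interpolated blend between the four surrounding grid values.
    const lerp = (a: number, b: number, t: number) => a + (b - a) * (t * t * (3 - 2 * t));
    const top = lerp(hash(xi, yi), hash(xi + 1, yi), fx);
    const bottom = lerp(hash(xi, yi + 1), hash(xi + 1, yi + 1), fx);
    return lerp(top, bottom, fy) * 2 - 1;
  }

  // Render the noise field directly into an ImageData buffer: one putImageData
  // call instead of thousands of fillRect calls.
  function drawNoise(
    ctx: CanvasRenderingContext2D,
    width: number,
    height: number,
    offsetX: number, // current drag/scroll position in "map" coordinates
    offsetY: number,
    scale = 0.01     // smaller = smoother, wavier features
  ): void {
    const image = ctx.createImageData(width, height);
    const data = image.data; // Uint8ClampedArray, 4 bytes per pixel (RGBA)
    for (let y = 0; y < height; y++) {
      for (let x = 0; x < width; x++) {
        // Map noise from [-1, 1] to [0, 255].
        const v = (noise2((x + offsetX) * scale, (y + offsetY) * scale) + 1) * 127.5;
        const i = (y * width + x) * 4;
        data[i] = v;            // R
        data[i + 1] = 255 - v;  // G: invert for a colored, wavy look
        data[i + 2] = 128;      // B
        data[i + 3] = 255;      // A: fully opaque
      }
    }
    ctx.putImageData(image, 0, 0);
  }

Calling drawNoise again on every mouse-move or scroll event with updated offsets gives the smoothly scrolling wavy effect the post describes.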




experiment

What is a Protoboard? (Experimentation board)

Category: Electronics and Electricity:Electronics:Components
Learn how to use the protoboard, or breadboard, experimentation board.




experiment

10 Ingenious Tech Experiments That Think Outside the Box

Innovation in technology often means thinking outside the box. This list highlights some of the most ingenious tech experiments that push boundaries and showcase human creativity. These projects aren’t just theoretical; they have real-world applications, making a big impact across various industries. From energy efficiency to data storage, these innovations show how inventive thinking can […]





experiment

Architect Benny Kuriakose experiments with light and design to help autistic kids in Chennai

Autism Awareness Month: At the Sankalp headquarters in Kolappancheri, architect Benny Kuriakose has experimented with natural lighting, curved pathways and verandahs as buffer zones




experiment

A new bakery in Visakhapatnam experiments with preservative-free artisanal breads

Breadway in Visakhapatnam focuses on offering authentic flavours through artisanal breads and bakes




experiment

Can LLMs Provide References for Their Work? An Experiment

Summary: We know that LLMs hallucinate. And we know that we can overcome some of that problem with better prompts. But can LLMs provide references for their work? I conducted an experiment.

Recently my friend Scott Mace asked this on X in response to a post about the evolution of software and how LLMs will likely follow that same trajectory:

Yes, but how trustworthy will it be? Can I at least see the links it used to create its response?
From X
Referenced 2024-05-21T18:09:06-0400

The short answer is "no", but the real answer is a bit more nuanced because of how LLMs like ChatGPT work. First the short answer: LLMs don't understand what they're saying, so they can't reference the material they used to create a response. They don't work like that. They don't go out, do a bunch of research, and then create a response. Instead, lots (and I mean LOTS) of material is fed into the program that creates the model, a multi-dimensional matrix of word orders. Then when you ask a question ("give it a prompt" in LLM-speak), it processes the question and generates text based on the probabilities of words in its model1. There's no way it can satisfy Scott directly because the model doesn't—can't, as far as I know—include the references that it used to create the text because it's generating it on the fly from material that has been thoroughly blended together.
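To make that concrete, here's a deliberately toy sketch (nothing like a real LLM, but directionally similar): a bigram model that counts which word follows which across all of its training text, then generates by sampling from those counts. Notice that nothing about where any particular word pair came from survives into the counts, which is why the generated text can't carry references.

  // Toy bigram "language model": train on text, then generate by sampling from
  // next-word frequencies. Nothing about *where* a word pair was seen survives
  // training, so the model cannot cite its sources.
  function train(corpus: string[]): Map<string, Map<string, number>> {
    const counts = new Map<string, Map<string, number>>();
    for (const doc of corpus) {
      const words = doc.toLowerCase().split(/\s+/).filter(Boolean);
      for (let i = 0; i < words.length - 1; i++) {
        const next = counts.get(words[i]) ?? new Map<string, number>();
        next.set(words[i + 1], (next.get(words[i + 1]) ?? 0) + 1);
        counts.set(words[i], next);
      }
    }
    return counts;
  }

  function generate(counts: Map<string, Map<string, number>>, start: string, length: number): string {
    const out = [start];
    let current = start;
    for (let i = 0; i < length; i++) {
      const next = counts.get(current);
      if (!next) break;
      // Sample the next word in proportion to how often it followed the current one.
      const total = [...next.values()].reduce((a, b) => a + b, 0);
      let r = Math.random() * total;
      for (const [word, count] of next) {
        r -= count;
        if (r <= 0) { current = word; break; }
      }
      out.push(current);
    }
    return out.join(" ");
  }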

But, in fairness, that's not what people do most of the time either. If you asked me over lunch or at the end of a tech presentation to describe zero trust and its pros and cons, I'd start talking. I wouldn't start looking up things on the web and somehow weave them into text complete with references. I'd go look for references if you asked me to back up something I said. So, let's try that with ChatGPT 4o. I gave it the following prompt:

You are a writer for a website focusing on technology. Your specialty is security. Write a 150 word explanation of zero trust with three pros and three cons.

ChatGPT answered with this text:

Zero Trust is a security model based on the principle of "never trust, always verify." Unlike traditional security models that assume everything inside the network is safe, Zero Trust assumes that threats can exist both inside and outside the network. It requires strict identity verification for every user and device attempting to access resources, regardless of their location.

Pros:

  1. Enhanced Security: By continuously verifying users and devices, Zero Trust reduces the risk of breaches and unauthorized access.
  2. Minimized Attack Surface: It limits access to only what is necessary, reducing potential entry points for attackers.
  3. Improved Compliance: Helps meet regulatory requirements by ensuring strict access controls and logging all access attempts.

Cons:

  1. Complex Implementation: Deploying Zero Trust can be complex and time-consuming, requiring a significant overhaul of existing systems and processes.
  2. Increased Costs: The initial investment in technology and training can be high.
  3. User Experience Impact: Continuous verification can lead to friction and inconvenience for users if not implemented smoothly.

Zero Trust is essential for modern cybersecurity, but it requires careful planning and resources to implement effectively.

That's pretty good. But what if we need some references to other material to back up the pros and cons? I asked ChatGPT to supply them:

Now repeat that task, but this time include references as hyperlinks to back up each pro and con.

ChatGPT answered with this text:

Zero Trust is a security model based on the principle of "never trust, always verify." Unlike traditional security models that assume everything inside the network is safe, Zero Trust assumes that threats can exist both inside and outside the network. It requires strict identity verification for every user and device attempting to access resources, regardless of their location.

Pros:

  1. Enhanced Security: By continuously verifying users and devices, Zero Trust reduces the risk of breaches and unauthorized access (CSO Online).
  2. Minimized Attack Surface: It limits access to only what is necessary, reducing potential entry points for attackers (NIST).
  3. Improved Compliance: Helps meet regulatory requirements by ensuring strict access controls and logging all access attempts (Gartner).

Cons:

  1. Complex Implementation: Deploying Zero Trust can be complex and time-consuming, requiring a significant overhaul of existing systems and processes (Forrester).
  2. Increased Costs: The initial investment in technology and training can be high (ZDNet).
  3. User Experience Impact: Continuous verification can lead to friction and inconvenience for users if not implemented smoothly (TechRepublic).

Zero Trust is essential for modern cybersecurity, but it requires careful planning and resources to implement effectively.

That looks great! Nice references for each bullet. There's just one problem. All the links, except the NIST reference, are dead. I don't know if they're wrong, made up, or stale. I tried finding content on a few of the sites by searching and came up dry. As I heard Johannes Ernst say once, ChatGPT is a "large language model" not a "large fact model" and this bears that out.

Still, asking for references or evidence for statements is something people are going to want. So, I tried a different tack. This time I asked this:

Why do you say complex implementation is a problem for zero trust? Can you give me a few references on that I can read myself?

ChatGPT responded with this:

Implementing a Zero Trust security model can be challenging for several reasons:

  1. Overall Complexity: Zero Trust requires a comprehensive understanding of your network's intricacies, including users, devices, applications, and data. This can involve significant changes to existing infrastructure, which can be disruptive and costly. Organizations need to be prepared for the extensive planning and resources required for successful implementation (Help Net Security) (Enterprise Networking Planet).
  2. Legacy Systems: Many organizations have legacy systems that are not designed to operate within a Zero Trust framework. These systems might require different security measures or even complete replacement, adding to the complexity and cost (Help Net Security) (Enterprise Networking Planet).
  3. Cultural Shift: Zero Trust is not just a technological shift but also a cultural one. It requires continuous verification and strict access controls, which can lead to user frustration and resistance if not managed properly. Ensuring user buy-in and minimizing disruption to workflows through training and communication is essential (Deloitte United States) (Guidehouse).
  4. Resource Intensive: Implementing and maintaining Zero Trust can put a strain on IT resources due to the need for continuous monitoring and management. The need for advanced security tools and ongoing updates further adds to the resource demands (Help Net Security) (Enterprise Networking Planet).
  5. Stakeholder Alignment: Achieving buy-in from all stakeholders and aligning them with the Zero Trust principles is critical. This requires a coordinated effort to educate stakeholders, prioritize gaps, and ensure consistent understanding and application of Zero Trust principles across the organization (Deloitte United States) (Guidehouse).

These challenges highlight the importance of a phased and well-planned approach to Zero Trust implementation, considering both technological and human factors. For more detailed insights, you can refer to articles from Help Net Security and Guidehouse.

Much better. The links in the bullets all work, but the ones at the end don't. The articles it references aren’t very detailed, so I wouldn’t say they’re stellar references, but at least there’s a pointer to outside material that discusses similar challenges. So, partial success.

I've found that when you're using ChatGPT to do research, treating it like a librarian and asking follow-up questions is the best way to gain confidence that the material is correct and to find supporting material, just as I did above. I'm sure you could wrap additional helper functions around an LLM to make getting references easier, as sketched below. But for now, interrogation through chat is the best way to get comfortable with the results.
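As one example of such a helper, hypothetical and not anything ChatGPT provides, you could post-process whatever text the model returns: pull out the URLs and check whether each one still resolves, so dead links like the ones above get flagged automatically. This sketch assumes Node 18+ for the built-in fetch.

  // Sketch of a post-processing helper: extract URLs from an LLM response and
  // check whether each one actually resolves. The LLM call itself is omitted.
  async function checkReferences(responseText: string): Promise<void> {
    const urls = responseText.match(/https?:\/\/[^\s)\]"']+/g) ?? [];
    for (const url of urls) {
      try {
        // HEAD keeps it cheap; some servers reject HEAD, so fall back to GET.
        let res = await fetch(url, { method: "HEAD", redirect: "follow" });
        if (res.status === 405) {
          res = await fetch(url, { method: "GET", redirect: "follow" });
        }
        console.log(`${res.ok ? "OK  " : "DEAD"} ${res.status} ${url}`);
      } catch {
        console.log(`DEAD ---- ${url} (request failed)`);
      }
    }
  }

  // Usage: checkReferences(chatGptAnswer).catch(console.error);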


Notes

  1. Yeah, this is really simplified and thus wrong, but it's directionally correct. For a better explanation, see What Is ChatGPT Doing … and Why Does It Work?.

Photo Credit: Older Man Talking to Robot Librarian from DALL-E (public domain) Prompt: draw me a rectangular picture of an older man talking with a robot librarian. Use earth tones.







experiment

An Experiment in Ceramics

Fr. Pat reflects upon the treasure of which Paul speaks in 2 Corinthians 4:6-7; what that treasure is, what is needed to take possession of it, and that in which it is contained.




experiment

Launch: searchmash, an experimental site started by Google

Uses Ajax and some other web2.0-ish features.




experiment

Early Results of Experiments with Responsive Open Learning Environments

Responsive open learning environments (ROLEs) are the next generation of personal learning environments (PLEs). While PLEs rely on the simple aggregation of existing content and services mainly using Web 2.0 technologies, ROLEs are transforming lifelong learning by introducing a new infrastructure on a global scale while dealing with existing learning management systems, institutions, and technologies. The requirements engineering process in highly populated test-beds is as important as the technology development. In this paper, we will describe first experiences deploying ROLEs at two higher learning institutions in very different cultural settings. Shanghai Jiao Tong University in China and the “Center for Learning and Knowledge Management and Department of Information Management in Mechanical Engineering” (ZLW/IMA) at RWTH Aachen University have exposed ROLEs to their students in already established courses. The results demonstrated the readiness of the technology for large-scale trials and the benefits for the students, leading to new insights into the design of ROLEs, also for more informal learning situations.




experiment

Pattern of Plagiarism in Novice Students’ Generated Programs: An Experimental Approach




experiment

Sheer Curation of Experiments: Data, Process, Provenance

This paper describes an environment for the “sheer curation” of the experimental data of a group of researchers in the fields of biophysics and structural biology. The approach involves embedding data capture and interpretation within researchers' working practices, so that it is automatic and invisible to the researcher. The environment does not capture just the individual datasets generated by an experiment, but the entire workflow that represents the “story” of the experiment, including intermediate files and provenance metadata, so as to support the verification and reproduction of published results. As the curation environment is decoupled from the researchers’ processing environment, the provenance is inferred from a variety of domain-specific contextual information, using software that implements the knowledge and expertise of the researchers. We also present an approach to publishing the data files and their provenance according to linked data principles by using OAI-ORE (Open Archives Initiative Object Reuse and Exchange) and OPMV.




experiment

Openness to Experience: An HCI Experiment




experiment

Experimenting with eXtreme Teaching Method – Assessing Students’ and Teachers’ Experiences




experiment

Student Preferences and Performance in Online and Face-to-Face Classes Using Myers-Briggs Indicator: A Longitudinal Quasi-Experimental Study

This longitudinal, quasi-experimental study investigated students’ cognitive personality type using the Myers-Briggs personality Type Indicator (MBTI) in Internet-based Online and Face-to-Face (F2F) modalities. A total of 1154 students enrolled in 28 Online and 32 F2F sections taught concurrently over a period of fourteen years. The study measured whether the sample is similar to the national average percentage frequency of all 16 different personality types; whether specific personality type students preferred a specific modality of instructions and if this preference changed over time; whether learning occurred in both class modalities; and whether specific personality type students learned more from a specific modality. Data was analyzed using regression, t-test, frequency, and Chi-Squared. The study concluded that data used in the study was similar to the national statistics; that no major differences in preference occurred over time; and that learning did occur in all modalities, with more statistically significant learning found in the Online modality versus F2F for Sensing, Thinking, and Perceiving types. Finally, Sensing and Thinking (ST) and Sensing and Perceiving (SP) group types learned significantly more in Online modality versus F2F.




experiment

A Guide for Novice Researchers on Experimental and Quasi-Experimental Studies in Information Systems Research




experiment

Detecting Data Errors in Organizational Settings: Examining the Generalizability of Experimental Findings




experiment

User Perceptions of Aesthetic Visual Design Variables within the Informing Environment: A Web-Based Experiment




experiment

Case Study of a Complex Informing System: Joint Interagency Field Experimentation (JIFX)

The Joint Interagency Field Experimentation (JIFX) event, organized by the Naval Postgraduate School (NPS), is conducted 3-4 times a year at various locations. The four-day event can be characterized as an informing system specifically designed to facilitate structured and unstructured communications between a variety of parties—e.g., software developers, inventors, military and civilian users of various technologies, academics, and agencies responsible for identifying and procuring technology solutions—that frequently are constrained in their informing activities in more restrictive venues. Over the course of the event, participants may observe technology demonstrations, obtain feedback from potential users, acquire new ideas about how their technologies might be employed and, perhaps most significantly, engage in ad hoc collaborations with other participants. The present paper describes an exploratory case research study that was conducted over a one-year period and involved both direct observation of the event and follow-up interviews with 49 past participants in the event. The goal of the research was to assess the nature of participant-impact resulting from attending JIFX and to consider the consistency of the findings with the predictions of various theoretical frameworks used in informing science. The results suggest that participants perceived that the event provided significant value from three principal sources: discovery, interaction with potential clients (users) of the technologies involved, and networking with other participants. These findings were largely consistent with what could be expected from informing under conditions of high complexity; because value generally derives from combinations of attributes rather than from the sum of individual attributes, we would expect that overall value from informing activities will be perceived even though estimates of the incremental value of that informing cannot be made.




experiment

STORIES ABOUT VALUES AND VALUABLE STORIES: A FIELD EXPERIMENT OF THE POWER OF NARRATIVES TO SHAPE NEWCOMERS' ACTIONS

This study draws on social identity theories of behavioral contagion and research concerning narratives in organizations to present and test a framework for understanding how narratives embed values in organizational newcomers' actions. Employing a field experiment using 632 newly-hired employees in a large IT firm that prioritizes self-transcendent values, this study explores how narratives varying in terms of the organizational level of main characters and the values-upholding or values-violating behaviors of those characters influence newcomers' tendencies to engage in behaviors that uphold or deviate from the values. Results indicate that stories about low-level organizational characters engaging in values-upholding behaviors are more positively associated with self-transcendent, helping behaviors and negatively associated with deviant behaviors, than are similar stories about high-level members of the organization. Stories in which high-level members of the organization violate values are negatively related to newcomers' engagement in both helping and deviance more strongly than are values-violating stories about lower-level members. Content analyses of the stories suggest that they convey values in different and potentially important ways. Implications, future directions, and limitations are discussed.




experiment

Walking collaboratories: experimentations with climate and waste pedagogies.

Children's Geographies; 05/10/2022
(AN 156804675); ISSN: 14733285
Academic Search Premier





experiment

Preparing research samples for safe arrival at centers and facilities: recipes for successful experiments

Preparation of biomacromolecules for structural biology studies is a complex and time-consuming process. The goal is to produce a highly concentrated, highly pure product that is often shipped to large facilities with tools to prepare the samples for crystallization trials or for measurements at synchrotrons and cryoEM centers. The aim of this article is to provide guidance and to discuss general considerations for shipping biomacromolecular samples. Details are also provided about shipping samples for specific experiment types, including solution- and cryogenic-based techniques. These guidelines are provided with the hope that the time and energy invested in sample preparation is not lost due to shipping logistics.




experiment

A method with ultra-high angular resolution for X-ray diffraction experiments

In X-ray diffraction measurements, the angular resolution has a detection limit due to the receiving size of the detector. In many cases this detection limit is too large and must be overcome to obtain the desired information. A novel method is proposed here in which the detector measures and moves simultaneously. Using a deconvolution algorithm to remove the convolution effect, the pixel-size limitation is finally broken. The algorithm used is not a common one; it suppresses signals at high frequencies, ensuring the reliability of the peak shape after restoration. The feasibility of this method is verified by successfully measuring the crystal truncation rod signal of a SrTiO3 single crystal, and the resolution is nearly ten times higher than that of a single pixel. Moreover, this method greatly reduces the noise and improves the signal-to-noise ratio.




experiment

xrdPlanner: exploring area detector geometries for powder diffraction and total scattering experiments

xrdPlanner is a software package designed to aid in the planning and preparation of powder X-ray diffraction and total scattering beam times at synchrotron facilities. Many modern beamlines provide a flexible experimental setup and may have several different detectors available. In combination with a range of available X-ray energies, this often makes it difficult for the user to explore the available parameter space relevant for a given experiment prior to the scheduled beam time. xrdPlanner was developed to provide a fast and straightforward tool that allows users to visualize the accessible part of reciprocal space of their experiment at a given combination of photon energy and detector geometry. Planning and communicating the necessary geometry not only saves time but also helps the beamline staff prepare for and accommodate an experiment. The program is tailored toward powder X-ray diffraction and total scattering experiments but may also be useful for other experiments that rely on an area detector and for which detector placement and achievable momentum-transfer range are important experimental parameters.
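The core geometry behind such planning is easy to sketch. The snippet below is not xrdPlanner's code or API; it is just the standard flat-detector relation between photon energy, detector size and distance, and the maximum accessible momentum transfer, with the example numbers chosen arbitrarily.

  // Rough sketch of the maximum accessible Q for an on-axis area detector.
  // Not xrdPlanner's code; just the textbook flat-detector geometry.
  function qMax(energyKeV: number, detectorHalfWidthMm: number, sampleDetectorDistanceMm: number): number {
    const wavelengthA = 12.3984 / energyKeV; // photon wavelength in angstroms (hc ≈ 12.3984 keV·Å)
    const twoThetaMax = Math.atan(detectorHalfWidthMm / sampleDetectorDistanceMm); // max scattering angle (rad)
    // Q = 4·pi·sin(theta)/lambda, with theta = half the scattering angle.
    return (4 * Math.PI * Math.sin(twoThetaMax / 2)) / wavelengthA;
  }

  // Example: 30 keV beam, 200 mm detector half-width at a 300 mm sample-detector distance.
  console.log(`Q_max ≈ ${qMax(30, 200, 300).toFixed(2)} Å⁻¹`);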




experiment

Determination of optimal experimental conditions for accurate 3D reconstruction of the magnetization vector via XMCD-PEEM

This work presents a detailed analysis of the performance of X-ray magnetic circular dichroism photoemission electron microscopy (XMCD-PEEM) as a tool for vector reconstruction of magnetization. For this, 360° domain wall ring structures which form in a synthetic antiferromagnet are chosen as the model to conduct the quantitative analysis. An assessment is made of how the quality of the results is affected depending on the number of projections that are involved in the reconstruction process, as well as their angular distribution. For this a self-consistent error metric is developed which allows an estimation of the optimum azimuthal rotation angular range and number of projections. This work thus proposes XMCD-PEEM as a powerful tool for vector imaging of complex 3D magnetic structures.




experiment

Efficient end-to-end simulation of time-dependent coherent X-ray scattering experiments

Physical optics simulations for beamlines and experiments allow users to test experiment feasibility and optimize beamline settings ahead of beam time, making the best use of valuable beam time at synchrotron light sources like NSLS-II. Further, such simulations also help to develop and test experimental data processing methods and software in advance. The Synchrotron Radiation Workshop (SRW) software package supports such complex simulations. We demonstrate how recent developments in SRW significantly improve the efficiency of physical optics simulations, such as end-to-end simulations of time-dependent X-ray photon correlation spectroscopy experiments with partially coherent undulator radiation (UR). The molecular dynamics simulation code LAMMPS was chosen to model the sample: a solution of silica nanoparticles in water at room temperature. Real-space distributions of nanoparticles produced by LAMMPS were imported into SRW and used to simulate scattering patterns of partially coherent hard X-ray UR from such a sample at the detector. The partially coherent UR illuminating the sample can be represented by a set of orthogonal coherent modes obtained by simulation of emission and propagation of this radiation through the coherent hard X-ray (CHX) scattering beamline followed by a coherent-mode decomposition. GPU acceleration is added for several key functions of SRW used in propagation from sample to detector, further improving the speed of the calculations. The accuracy of this simulation is benchmarked by comparison with experimental data.




experiment

A new experimental setup for combined fast differential scanning calorimetry and X-ray photon correlation spectroscopy

Synchrotron-radiation-based techniques are a powerful tool for the investigation of materials. In particular, the availability of highly brilliant sources has opened the possibility to develop techniques sensitive to dynamics at the atomic scale such as X-ray photon correlation spectroscopy (XPCS). XPCS is particularly relevant in the study of glasses, which have often been investigated at the macroscopic scale by, for example, differential scanning calorimetry. Here, we show how to adapt a Flash calorimeter to combine XPCS and calorimetric scans. This setup paves the way to novel experiments requiring dynamical and thermodynamic information, ranging from the study of the crystallization kinetics to the study of the glass transition in systems that can be vitrified thanks to the high cooling rates reachable with an ultrafast calorimeter.




experiment

A distributed data processing scheme based on Hadoop for synchrotron radiation experiments

With the development of synchrotron radiation sources and high-frame-rate detectors, the amount of experimental data collected at synchrotron radiation beamlines has increased exponentially. As a result, data processing for synchrotron radiation experiments has entered the era of big data. It is becoming increasingly important for beamlines to have the capability to process large-scale data in parallel to keep up with the rapid growth of data. Currently, there is no set of data processing solutions based on the big data technology framework for beamlines. Apache Hadoop is a widely used distributed system architecture for solving the problem of massive data storage and computation. This paper presents a set of distributed data processing schemes for beamlines with experimental data using Hadoop. The Hadoop Distributed File System is utilized as the distributed file storage system, and Hadoop YARN serves as the resource scheduler for the distributed computing cluster. A distributed data processing pipeline that can carry out massively parallel computation is designed and developed using Hadoop Spark. The entire data processing platform adopts a distributed microservice architecture, which makes the system easy to expand, reduces module coupling and improves reliability.




experiment

First X-ray spectral ptychography and resonant ptychographic computed tomography experiments at the SWING beamline from Synchrotron SOLEIL

X-ray ptychography and ptychographic computed tomography have seen a rapid rise since the advent of fourth-generation synchrotrons with a high degree of coherent radiation. In addition to quantitative multiscale structural analysis, ptychography with spectral capabilities has been developed, allowing for spatially localized multiscale structural and spectral information of samples. The SWING beamline of Synchrotron SOLEIL has recently developed a nanoprobe setup where the endstation's first spectral and resonant ptychographic measurements have been successfully conducted. A metallic nickel wire sample was measured using 2D spectral ptychography in XANES mode and resonant ptychographic tomography. From the 2D spectral ptychography measurements, the spectra of the components of the sample's complex-valued refractive index, δ and β, were extracted, integrated along the sample thickness. By performing resonant ptychographic tomography at two photon energies, 3D maps of the refractive index decrement, δ, were obtained at the Ni K-edge energy and another energy above the edge. These maps allowed the detection of impurities in the Ni wire. The significance of accounting for the atomic scattering factor is demonstrated in the calculation of electron density near a resonance through the use of the δ values. These results indicate that at the SWING beamline it is possible to conduct state-of-the-art spectral and resonant ptychography experiments using the nanoprobe setup.




experiment

Investigation of fast and efficient lossless compression algorithms for macromolecular crystallography experiments

Structural biology experiments benefit significantly from state-of-the-art synchrotron data collection. One can acquire macromolecular crystallography (MX) diffraction data on large-area photon-counting pixel-array detectors at framing rates exceeding 1000 frames per second, using 200 Gbps network connectivity, or higher when available. In extreme cases this represents a raw data throughput of about 25 GB s⁻¹, which is nearly impossible to deliver at reasonable cost without compression. Our field has used lossless compression for decades to make such data collection manageable. Many MX beamlines are now fitted with DECTRIS Eiger detectors, all of which are delivered with optimized compression algorithms by default, and they perform well with current framing rates and typical diffraction data. However, better lossless compression algorithms have been developed and are now available to the research community. Here one of the latest and most promising lossless compression algorithms is investigated on a variety of diffraction data like those routinely acquired at state-of-the-art MX beamlines.




experiment

Signal-to-noise and spatial resolution in in-line imaging. 1. Basic theory, numerical simulations and planar experimental images

Signal-to-noise ratio and spatial resolution are quantitatively analysed in the context of in-line (propagation-based) X-ray phase-contrast imaging. It is known that free-space propagation of a coherent X-ray beam from the imaged object to the detector plane, followed by phase retrieval in accordance with Paganin's method, can increase the signal-to-noise in the resultant images without deteriorating the spatial resolution. This results in violation of the noise-resolution uncertainty principle and demonstrates 'unreasonable' effectiveness of the method. On the other hand, when the process of free-space propagation is performed in software, using the detected intensity distribution in the object plane, it cannot reproduce the same effectiveness, due to the amplification of photon shot noise. Here, it is shown that the performance of Paganin's method is determined by just two dimensionless parameters: the Fresnel number and the ratio of the real decrement to the imaginary part of the refractive index of the imaged object. The relevant theoretical analysis is performed first, followed by computer simulations and then by a brief test using experimental images collected at a synchrotron beamline. More extensive experimental tests will be presented in the second part of this paper.




experiment

GIWAXS experimental methods at the NFPS-BL17B beamline at Shanghai Synchrotron Radiation Facility

The BL17B beamline at the Shanghai Synchrotron Radiation Facility was first designed as a versatile high-throughput protein crystallography beamline and one of five beamlines affiliated to the National Facility for Protein Science in Shanghai. It was officially opened to users in July 2015. As a bending magnet beamline, BL17B has the advantages of high photon flux, brightness, energy resolution and continuously adjustable energy between 5 and 23 keV. The experimental station excels in crystal screening and structure determination, providing cost-effective routine experimental services to numerous users. Given the interdisciplinary and green energy research demands, the BL17B beamline has undergone optimization, expanded its range of experimental methods and enhanced sample environments for a more user-friendly testing mode. These methods include single-crystal X-ray diffraction, powder crystal X-ray diffraction, wide-angle X-ray scattering, grazing-incidence wide-angle X-ray scattering (GIWAXS), and fully scattered atom pair distribution function analysis, covering structure detection from crystalline to amorphous states. This paper primarily presents the performance of the BL17B beamline and the application of the GIWAXS methodology at the beamline in the field of perovskite materials.




experiment

Development and performance simulations of a soft X-ray and XUV split-and-delay unit at beamlines FL23/24 at FLASH2 for time-resolved two-color pump–probe experiments

The split-and-delay unit (SDU) at FLASH2 will be upgraded to enable the simultaneous operation of two temporally, spatially and spectrally separated probe beams when the free-electron laser undulators are operated in a two-color scheme. By means of suitable thin filters and an optical grating beam path a wide range of combinations of photon energies in the spectral range from 150 eV to 780 eV can be chosen. In this paper, simulations of the spectral transmission and performance parameters of the filter technique are discussed, along with a monochromator with dispersion compensation presently under construction.




experiment

Data collection is your last experiment




experiment

TAAM refinement on high-resolution experimental and simulated 3D ED/MicroED data for organic molecules

3D electron diffraction (3D ED), or microcrystal electron diffraction (MicroED), has become an alternative technique for determining the high-resolution crystal structures of compounds from sub-micron-sized crystals. Here, we considered L-alanine, α-glycine and urea, which are known to form good-quality crystals, and collected high-resolution 3D ED data on our in-house TEM instrument. In this study, we present a comparison of independent atom model (IAM) and transferable aspherical atom model (TAAM) kinematical refinement against experimental and simulated data. TAAM refinement on both experimental and simulated data clearly improves the model fitting statistics (R factors and residual electrostatic potential) compared to IAM refinement. This shows that TAAM better represents the experimental electrostatic potential of organic crystals than IAM. Furthermore, we compared the geometrical parameters and atomic displacement parameters (ADPs) resulting from the experimental refinements with the simulated refinements, with the periodic density functional theory (DFT) calculations and with published X-ray and neutron crystal structures. The TAAM refinements on the 3D ED data did not improve the accuracy of the bond lengths between the non-H atoms. The experimental 3D ED data provided more accurate H-atom positions than the IAM refinements on the X-ray diffraction data. The IAM refinements against 3D ED data had a tendency to lead to slightly longer X—H bond lengths than TAAM, but the difference was statistically insignificant. Atomic displacement parameters were too large by tens of percent for L-alanine and α-glycine. Most probably, other unmodelled effects were causing this behaviour, such as radiation damage or dynamical scattering.




experiment

EMinsight: a tool to capture cryoEM microscope configuration and experimental outcomes for analysis and deposition

The widespread adoption of cryoEM technologies for structural biology has pushed the discipline to new frontiers. A significant worldwide effort has refined the single-particle analysis (SPA) workflow into a reasonably standardized procedure. Significant investments of development time have been made, particularly in sample preparation, microscope data-collection efficiency, pipeline analyses and data archiving. The widespread adoption of specific commercial microscopes, software for controlling them and best practices developed at facilities worldwide has also begun to establish a degree of standardization to data structures coming from the SPA workflow. There is opportunity to capitalize on this moment in the maturation of the field, to capture metadata from SPA experiments and correlate the metadata with experimental outcomes, which is presented here in a set of programs called EMinsight. This tool aims to prototype the framework and types of analyses that could lead to new insights into optimal microscope configurations as well as to define methods for metadata capture to assist with the archiving of cryoEM SPA data. It is also envisaged that this tool will be useful to microscope operators and facilities looking to rapidly generate reports on SPA data-collection and screening sessions.




experiment

HEIDI: an experiment-management platform enabling high-throughput fragment and compound screening

The Swiss Light Source facilitates fragment-based drug-discovery campaigns for academic and industrial users through the Fast Fragment and Compound Screening (FFCS) software suite. This framework is further enriched by the option to utilize the Smart Digital User (SDU) software for automated data collection across the PXI, PXII and PXIII beamlines. In this work, the newly developed HEIDI webpage (https://heidi.psi.ch) is introduced: a platform crafted using state-of-the-art software architecture and web technologies for sample management of rotational data experiments. The HEIDI webpage features a data-review tab for enhanced result visualization and provides programmatic access through a representational state transfer application programming interface (REST API). The migration of the local FFCS MongoDB instance to the cloud is highlighted and detailed. This transition ensures secure, encrypted and consistently accessible data through a robust and reliable REST API tailored for the FFCS software suite. Collectively, these advancements not only significantly elevate the user experience, but also pave the way for future expansions and improvements in the capabilities of the system.




experiment

CHiMP: deep-learning tools trained on protein crystallization micrographs to enable automation of experiments

A group of three deep-learning tools, referred to collectively as CHiMP (Crystal Hits in My Plate), were created for analysis of micrographs of protein crystallization experiments at the Diamond Light Source (DLS) synchrotron, UK. The first tool, a classification network, assigns images into categories relating to experimental outcomes. The other two tools are networks that perform both object detection and instance segmentation, resulting in masks of individual crystals in the first case and masks of crystallization droplets in addition to crystals in the second case, allowing the positions and sizes of these entities to be recorded. The creation of these tools used transfer learning, where weights from a pre-trained deep-learning network were used as a starting point and repurposed by further training on a relatively small set of data. Two of the tools are now integrated at the VMXi macromolecular crystallography beamline at DLS, where they have the potential to absolve the need for any user input, both for monitoring crystallization experiments and for triggering in situ data collections. The third is being integrated into the XChem fragment-based drug-discovery screening platform, also at DLS, to allow the automatic targeting of acoustic compound dispensing into crystallization droplets.