case study

Case Study: BT One Enterprise Cisco: Work without boundaries around the world

A BT One Enterprise Cisco solution makes voice calls between the UK and India simple and immediate. In most cases they’re free of charge too. Best of all, the employees feel as one with the BT global team.




case study

Case Study: Telepresence enables SASOL to operate as a single global community

SASOL has adopted telepresence for collaboration between far-flung executives in Europe, North America and Africa. Running over the BT IP Connect global network, it also enables federation with customers and suppliers. The solution’s earned Alec’s team a special recognition award from the CEO for bringing the company’s One SASOL philosophy and vision vibrantly to life.




case study

Case Study: GSK Nutritional Healthcare: Market leader makes customer care miles better

Care isn’t just part of the name at GSK Nutritional Healthcare. It’s at the heart of its customer help lines. But Ashley Thomas knew that the company’s legacy telephone technology was becoming a hindrance. Keen to boost customer service with new technology, Ashley reviewed the market, which led to BT Cloud Contact.




case study

Case Study: FIAT Group IMV: Virtual solution helps motor trader integrate its operations

An infrastructure offering high levels of quality and flexibility was required as a platform for a new system. Buying or renting new servers – and connecting and configuring them in short timescales – would be an expensive and challenging task. The IMV technical team turned to BT and asked whether it would be possible to set up the system in a virtual environment.




case study

Case Study: Tesco uses BT Cloud Contact technology to bring it closer to customers

Tesco, one of the world’s largest retailers, is using the BT Cloud Contact solution to give its UK customers an enhanced, more flexible and more responsive contact centre service.




case study

Germany: A cleantech case study for a post-Fukushima world

In the wake of the worst nuclear disaster in a generation, Germany doubled down on a decade of success, pledging to eliminate nukes by 2022 and switch almost ex...




case study

Business Podcast Marketing Case Study Proves Results

Business Podcast Marketing Case Study Shows How Podcasting Delivers Dramatic Results for Client. Podcasting has significant business marketing potential. If the business podcast strategy and online visibility plan are properly executed, podcasting can be a marketing tool that delivers great results.





case study

[Case Study] How Can The Future Electronic Component Industry Effectively Outsource its Customer Service?

Callnovo, the global multilingual customer service contact center, offers a powerful bilingual outsourcing solution in English and Spanish for Utsource's European market.




case study

Case Study: Reinvent This Retailer

Hear this story based on real events at J.C. Penney. A discussion with contributor Jill Avery and editor Andy O'Connell follows.




case study

Roads In Landscape Modeling: A Case Study of A Road Data Layer and Use In The Interior Northwest Landscape Analysis System

Roads are important ecological features of forest landscapes, but their cause-and-effect relationships with other ecosystem components have only recently been included in integrated landscape analyses. Simulation models can help us to understand how forested landscapes respond over time to disturbance and socioeconomic factors, and potentially to address the important role roads play in these processes.




case study

Wood energy in Alaska-case study evaluations of selected facilities.

Biomass resources in Alaska are extensive and diverse, comprising millions of acres of standing small-diameter trees, diseased or dead trees, and trees having low-grade timber.




case study

Urban forest restoration cost modeling: a Seattle natural areas case study

Cities have become more committed to ecological restoration and management activities in urban natural areas.




case study

Case study comparison of two pellet heating facilities in southeastern Alaska

Over the past decade, wood-energy use in Alaska has grown dramatically.




case study

Credulous Users and Fake News: a Real Case Study on the Propagation in Twitter. (arXiv:2005.03550v1 [cs.SI])

Recent studies have confirmed a growing trend, especially among youngsters, of using Online Social Media as their favourite information platform at the expense of traditional mass media. Indeed, social media can easily reach a wide audience at high speed, but exactly because of this they are the preferred medium for influencing public opinion via so-called fake news. Moreover, there is general agreement that the main vehicle of fake news is malicious software robots (bots) that automatically interact with human users. In previous work we considered the problem of tagging human users in Online Social Networks as credulous users. Specifically, we considered credulous those users with a relatively high number of bot friends compared with the total number of their social friends. We consider this group of users worthy of attention because they might have a higher exposure to malicious activities and may contribute to the spreading of fake information by sharing dubious content. In this work, starting from a dataset of fake news, we investigate the behaviour and the degree of involvement of credulous users in fake news diffusion. The study aims to: (i) fight fake news by considering the content diffused by credulous users; (ii) highlight the relationship between credulous users and fake news spreading; (iii) target fake news detection by focusing on the analysis of specific accounts more exposed to the malicious activities of bots. Our first results demonstrate a strong involvement of credulous users in fake news diffusion. These findings call for tools that, by performing data streaming on credulous users' actions, enable targeted fact-checking.
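
The tagging criterion described in the abstract (a relatively high number of bot friends compared with a user's total social friends) can be sketched as a simple ratio test; the 0.5 cutoff and the follower counts below are illustrative assumptions, not values from the paper:

```python
def is_credulous(bot_friends, total_friends, ratio_threshold=0.5):
    """Tag a user as credulous when bot friends make up a large share
    of all social friends (the threshold value here is illustrative)."""
    if total_friends == 0:
        return False
    return bot_friends / total_friends > ratio_threshold

# Hypothetical per-user counts: (bot friends, total friends)
users = {"u1": (30, 40), "u2": (5, 200)}
credulous = {u for u, (b, t) in users.items() if is_credulous(b, t)}
print(credulous)  # → {'u1'}
```

Once tagged this way, the credulous set is the group whose sharing activity the study then analyses for fake-news diffusion.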




case study

Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine Translation. (arXiv:2005.03393v1 [cs.CL])

In encoder-decoder neural models, multiple encoders are generally used to represent contextual information in addition to the individual sentence. In this paper, we investigate multi-encoder approaches in document-level neural machine translation (NMT). Surprisingly, we find that the context encoder not only encodes the surrounding sentences but also behaves as a noise generator. This makes us rethink the real benefits of multi-encoder approaches in context-aware translation - some of the improvements come from robust training. We compare several methods that introduce noise and/or a well-tuned dropout setup into the training of these encoders. Experimental results show that noisy training plays an important role in multi-encoder-based NMT, especially when the training data is small. Also, we establish a new state of the art on the IWSLT Fr-En task by careful use of noise generation and dropout methods.
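
The abstract does not spell out the noise-injection methods it compares; a generic illustration of one such method, word-level dropout on the context-encoder input, might look like the sketch below (the function name and drop probability are assumptions, not the paper's recipe):

```python
import random

def noisy_context(tokens, drop_prob, unk="<unk>", rng=random):
    """Word-level dropout on the context-encoder input: each context
    token is independently replaced by <unk> with probability drop_prob."""
    return [unk if rng.random() < drop_prob else t for t in tokens]

ctx = "the previous sentence provides context".split()
noisy = noisy_context(ctx, drop_prob=0.3)  # some tokens become <unk>
```

During training, feeding `noisy` rather than `ctx` to the context encoder forces the model to stay robust when contextual information is unreliable, which is the effect the paper attributes to noisy training.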




case study

Streaming & Listening Diversity - Spotify Case Study

Will Artists Have An Easier Time Finding An Audience, Or Will Streaming Focus Global Attention On A Small Number Of Stars?




case study

Transatlantic Rifts: Asia-Pacific Scenario Case Study

3 February 2016

Drawing on the findings of a recent workshop exploring a potential conflict between China and Japan over disputed islands, this paper suggests there are significant differences between how the United States and Europe prioritize their interests in the Asia-Pacific.

Xenia Wickett

Former Head, US and the Americas Programme; Former Dean, The Queen Elizabeth II Academy for Leadership in International Affairs

Dr Jacob Parakilas

Former Deputy Head, US and the Americas Programme


A Japanese activist on board a boat is silhouetted at sunrise as it approaches the Senkaku/Diaoyu Islands, 19 August 2012. Photo by Getty Images.

Summary

  • Chatham House brought together European, Asian and American policy-makers and experts over the course of a two-day scenario workshop in November 2015. The participants were asked to take part in a structured role-playing exercise imagining a potential near-future conflict between China and Japan over disputed islands.
  • The findings of the workshop, and the actions of participants in the simulation, suggested significant differences between how the United States and Europe prioritize their interests in the Asia-Pacific. In particular, the perception was that the European Union and its member states consider challenges from their ‘near abroad’ as more tangible than those emanating from Asia, and that they focus on commercial opportunities in the region. In contrast, US foreign policy in the Asia-Pacific is seen as emphasizing strategic and geopolitical challenges.
  • In terms of military capabilities, Europeans view themselves as having few assets to bring to bear in Asia. European, American and Asian observers are largely unaware of French and British military capabilities in or near the region.
  • Beyond the military, Europe’s other tools of leverage – diplomatic, development, economic and other soft-power instruments – are also ignored. Europeans are often unaware of the activities of their own governments in the region. This is equally true in reverse – Japan’s engagement vis-à-vis European interests (such as with respect to Russia or Syria) is little recognized by Europeans.
  • European nations prefer to engage unilaterally with Asia on trade and multilaterally, through the EU, on security and geopolitical issues. However, no ideal forum for multilateral coordination exists (given the fact that the EU is not a member of most Asian regional organizations).
  • The US’s greater engagement in Asia reflects the fact that the US, unlike its European counterparts, is a Pacific nation. But it can also be explained by greater domestic public support for such engagement. This reflects the presence of significant numbers of US troops in Asia and the relatively high proportion of ethnic Asians in the US compared with the EU.





case study

Case Study: Cognitive Impairment, Depression, and Severe Hypoglycemia

John Zrebiec
Oct 1, 2006; 19:212-215
Clinical Decision Making




case study

Case Study: A Patient With Type 2 Diabetes Working With an Advanced Practice Pharmacist to Address Interacting Comorbidities

Peggy Yarborough
Jan 1, 2003; 16:
Case Studies




case study

Case Study: A Patient With Uncontrolled Type 2 Diabetes and Complex Comorbidities Whose Diabetes Care Is Managed by an Advanced Practice Nurse

Geralyn Spollett
Jan 1, 2003; 16:
Case Studies




case study

Case Study: Seizures and Hypoglycemia

Michael R. Brennan
Jan 1, 2012; 30:23-24
Case Studies




case study

Case Study: Type 1 and Type 2, Too?

Heidi L. Gassner
Jul 1, 2003; 21:
Case Studies




case study

Case Study: A 43-Year-Old Man With Perineal Pain and Swelling

David J. Meier
Oct 1, 2001; 19:
Case Studies




case study

Case Study: Renal Disease in Type 1 Diabetes

William H. Herman
Apr 1, 2001; 19:
Case Studies




case study

Case Study: Postsexual Penile Ulcer as a Symptom of Diabetes

Nehman Lauder
Oct 1, 2005; 23:191-192
Case Studies




case study

Case Study: New-Onset Diabetes: How to Tell the Difference Between Type 1 and Type 2 Diabetes

Joseph Largay
Jan 1, 2012; 30:25-26
Case Studies




case study

Case Study: Treating Hypertension in Patients With Diabetes

Evan M. Benjamin
Jul 1, 2004; 22:137-138
Case Studies




case study

Case Study: Diabetic Ketoacidosis in Type 2 Diabetes: "Look Under the Sheets"

Brian J. Welch
Oct 1, 2004; 22:198-200
Case Studies




case study

Case Study: Potential Pitfalls of Using Hemoglobin A1c as the Sole Measure of Glycemic Control

Huy A. Tran
Jul 1, 2004; 22:141-143
Case Studies




case study

Ethics and 269W: a case study / Greg Mead SC, Legal Services Commission.




case study

Seismic processing, inversion, and AVO for gold exploration: case study from Western Australia / Christopher B. Harrison and Milovan Urosevic.

"We investigate the potential of using high-resolution seismic methods for rock characterization and for targeting of gold deposits at the St. Ives gold camp. The application of seismic methods in hard-rock environments is challenged by complex structures, intrinsically low signal-to-noise ratio, regolith distortions, and access restrictions. If these issues can be addressed, then the unparalleled resolving power of reflection seismic can be used for mineral exploration. Appropriate spatial sampling of the wavefield combined with a survey geometry design and rigorous data processing to incorporate high fold and long offsets are necessary for creation of high-quality seismic images. In the hard-rock environment of Western Australia, accurate static corrections and multiphase velocity analysis are essential processing steps, followed by rigorous quality control after each step. In such a case, we show that the role of reflection seismic could be lifted from mere identification of first-order structures to refined lithological analyses. Five deep boreholes with sonic logs and core sample test data were used to calibrate 2D seismic images. Although the seismic images were produced with relatively robust scaling, it was possible to achieve reasonably high seismic-log correlation across three of the tightly spaced boreholes using a single composite wavelet. Amplitude-versus-offset (AVO) analysis indicated that gold-bearing structures may be related to an elevated AVO effect and increased reflectivity. Consequently, partial stack analysis and acoustic and elastic inversions were conducted. These results and impedance crossplots were then evaluated against known gold occurrences. While still in the preliminary stages, hard-rock seismic imaging, inversion, and the application of AVO techniques indicated significant potential for targeting mineral reserves" -- Summary.




case study

Interim technical report: project 3.1: lower cost, more effective 3D seismic exploration for hard rock environments: seismic exploration for mineral deposits case study summary table / authors: Milovan Urosevic, Andrej Bona.

"This document is intended to summarise the current state, understanding and use of the seismic reflection method for mineral exploration. Its primary objective is to provide a point of reference, based on actual case studies, for mineral explorers interested in the application of seismic methods to their projects. It provides summary information (including the purpose of the survey, acquisition methods, geometry and cost, processing procedures and key findings) that is intended to give the reader an objective means of assessing the cost effectiveness of the technique with respect to exploration objectives. It is also aimed at the exchange of information between DETCRC sponsors, affiliates and researchers. Finally, we hope that this table will help in shaping future research efforts and direction within Project 3.1 regarding the application of seismic for mineral exploration. This is only initial work and it is hoped that it will evolve into a document or a catalogue that will be extensively used by the mineral industry to achieve their exploration objectives in a more efficient and effective way" -- Executive summary.




case study

A stochastic user-operator assignment game for microtransit service evaluation: A case study of Kussbus in Luxembourg. (arXiv:2005.03465v1 [physics.soc-ph])

This paper proposes a stochastic variant of the stable matching model from Rasulkhani and Chow [1] which allows microtransit operators to evaluate their operation policy and resource allocations. The proposed model takes into account the stochastic nature of users' travel utility perception, resulting in a probabilistic stable operation cost allocation outcome for ticket price design and ridership forecasting. We applied the model to the operation policy evaluation of a microtransit service in Luxembourg and its border area. A methodology for estimating and calibrating the model parameters is developed. The results provide useful insights for the operator and the government to improve the ridership of the service.




case study

Case Study: Optimizing Cyberlink PowerDVD to improve battery life on Intel devices

Introduction: Low battery life is one of the most serious issues currently plaguing mobile devices in general and Ultrabook™ devices and tablets specifically. Users have become accustomed to s...




case study

Case Study: Delivering an immersive gaming experience for Intel-based hybrid devices

Abstract: Tencent wanted to give gamers the best experience on Intel® Ultrabook™ and 2 in 1 systems. Legend of Xuan Yuan was already a successful game, but these systems provided Tencent w...




case study

Case Study: How to adapt multiple input methods on Intel-based hybrid devices

  Trine* 2 from Frozenbyte, Inc. struggled with optimal playability on Intel® processor-based touchscreens and 2 in 1s running Windows* 8. Supporting varied play styles and local multiplayer require...




case study

Case Study: Developing an augmented reality app for Intel-based devices

  With augmented reality (AR) reaching smartphones, tablets, wearables (such as Google Glass*), and other platforms, the market is ripe for an AR development explosion across every conceivable applicati...




case study

Case Study: Developing a Health App for Windows 8

  Many people take medication, sometimes multiple times per day, to help them stay healthy. Making sure meds are taken on time and in the right doses requires an individual to be vigilant and discipline...




case study

Case Study: Building an award winning multi-touch enabled music app

  Innovations in computing form factors such as All-in-One (AIO) and tablet devices that combine desktop-like performance with multi-touch-enabled, high-resolution screens are giving people fun new ways...




case study

Mixing It Up in Hardware (an Advantest Case Study in Faster Full-Chip Simulations)

Key Findings: Advantest, in mixed-signal SoC design, sees a 50X speedup, a 25-day test reduced to 12 hours, and a dramatic test coverage increase.

Trolling through the CDNLive archives, I discovered another gem. At the May 2013 CDNLive in Munich, Thomas Henkel and Henriette Ossoinig of Advantest presented a paper titled “Timing-accurate emulation of a mixed-signal SoC using Palladium XP”. Advantest makes advanced electronics test equipment. Among the semiconductor designs they create for these products is a test processor chip with over 100 million logic transistors, but also with lots of analog functions. They set out to find a way to speed up their full-chip simulations to a point where they could run the system software. To do that, they needed about a 50X speed-up. Well, they did it!


Figure 1: Advantest SoC Test Products

 

To skip the commentary, read Advantest's paper here

Problem Statement

Software is becoming a bigger part of just about every hardware product in every market today, and that includes the semiconductor test market. To achieve high product quality in the shortest amount of time, the hardware and software components need to be verified together as early in the design cycle as possible. However, the throughput of a typical software RTL simulation is not sufficient to run significant amounts of software on a design with hundreds of millions of transistors.  

Executing software on RTL models of the hardware means long runs (“deep cycles”) that are a great fit for an emulator, but the mixed-signal content posed a new type of challenge for the Advantest team. Emulators are designed to run digital logic; analog is really outside the expected use model. The Advantest team examined the pros and cons of various mixed-signal co-simulation and acceleration flows and did not feel that they could get the performance they needed for practical runtimes with software testbenches. They became determined to find a way to apply their Palladium XP platform to the problem.

Armed with the knowledge of the essential relationship between the analog operations and the logic and software operations, the team was able to craft models of the analog blocks using reduction techniques that accurately depicted the essence of the analog function required for hardware-software verification without the expense of a continuous time simulation engine.

The requirements boiled down to the following:

• Generation of digital signals with highly accurate and flexible timing

• Complete chip needs to run on Palladium XP platform

• Create high-resolution timing (100fs) with reasonable emulation performance, i.e. at least 50X faster than simulation on the fastest workstations

Solution Idea

The solution approach chosen was to simplify the functional model of the analog elements of the design down to generation of digital signal edges with high timing accuracy. The solution employed a fixed-frequency central clock that was used as a reference. Timing-critical analog signals used to produce accurately placed digital outputs were encoded into multi-bit representations that modeled the transition and timing behavior. A cell library was created that took the encoded signals and converted them to the desired “regular signals”.
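
As a rough illustration of the encoding idea (not the actual TDU format, whose details are in Advantest's paper), an edge time can be stored as a cycle count plus a quantized offset from the last reference-clock edge. The 100 MHz reference clock below is an assumed example; the 100 fs resolution comes from the stated requirements:

```python
import math

# Illustrative numbers: a 100 MHz reference clock (10 ns period) and the
# 100 fs target resolution determine the bit width of the offset code.
PERIOD_FS = 10_000_000   # reference-clock period in femtoseconds (assumed)
RES_FS = 100             # target timing resolution in femtoseconds
BITS = math.ceil(math.log2(PERIOD_FS / RES_FS))  # 100,000 steps -> 17 bits

def encode_edge(t_fs):
    """Encode an edge time (fs) as (reference cycle, quantized offset code)."""
    cycle, offset = divmod(t_fs, PERIOD_FS)
    return cycle, round(offset / RES_FS)

def decode_edge(cycle, code):
    """Recover the edge time from the multi-bit representation."""
    return cycle * PERIOD_FS + code * RES_FS

cycle, code = encode_edge(34_567_800)          # an edge on the 100 fs grid
assert decode_edge(cycle, code) == 34_567_800  # round-trips exactly
```

The bit width of the offset code is what makes the scheme parameterizable: widening or narrowing it trades quantization accuracy against emulation cost.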

Automation was added to the process by changing the netlisting to widen the analog signals according to user-specified schematic annotations. All of this was done in a fashion that is compatible with debugging in Cadence’s SimVision tool. Details on all of these facets to follow.

The Timing Description Unit (TDU) Format

The innovative thinking that enabled the use of Palladium XP was the idea of combining a reference clock and quantized signal encoding to create offsets from the reference. The implementation of these ideas was done in a general manner so that different bit widths could easily be used to control the quantization accuracy.

 

Figure 2: Quantization method using signal encoding

 

Timed Cell Modeling

You might be thinking – timing and emulation, together?! Yes, and here’s a method to do it…

The engineering work in realizing the TDU idea involved the creation of a library of cells that could be used to compose the functions that convert the encoded signal into the “real signals” (timing-accurate digital output signals). Beyond some basic logic cells (e.g., INV, AND, OR, MUX, DFF, TFF, LATCH), some special cells such as window-latch, phase-detect, vernier-delay-line, and clock-generator were created. The converter functions were all composed from these basic cells. This approach ensured an easy path from design into emulation.

The solution was made parameterizable to handle varying needs for accuracy.  Single bit inputs need to be translated into transitions at offset zero or a high or low coding depending on the previous state.  Single bit outputs deliver the final state of the high-resolution output either at time zero, the next falling, or the next rising edge of the grid clock, selectable by parameter. Output transitions can optionally be filtered to conform to a configurable minimum pulse width.

Timed Cell Structure

There are four critical elements to the design of the conversion function blocks (time cells):

• Input conditioning – convert to zero-offset, optional glitch preservation, and multi-cycle path

• Transition sorting – sort transitions according to timing offset and specified precedence

• Function – for each input transition, create the appropriate output transition

• Output filtering – capability to optionally remove multiple transitions, zero-width pulses, etc.

Timed Cell Caveat

All of the cells are combinational and deliver a result in the same cycle of an input transition. This holds for storage elements as well. For example a DFF will have a feedback to hold its state. Because feedback creates combinational loops, the loops need a designation to be broken (using a brk input conditioning function in this case – more on this later). This creates an additional requirement for flip-flop clock signals to be restricted to two edges per reference clock cycle.

Note that without minimum width filtering, the number of output transitions of logic gates is the sum of all input transitions (potentially lots of switching activity). Also note that the delay cell has the effect of doubling the number of output transitions per input transition.

 

Figure 3: Edge doubling will increase switching during execution

 

SimVision Debug Support

The debug process was set up to revolve around VCD file processing, directed and viewed within the SimVision debug tool. To make sense of what is going on from a functional standpoint, the raw simulation output is post-processed so that the encoded signals appear as high-precision timing signals in the waveform viewer. The flow is shown in the figure below.

 

Figure 4: Waveform post-processing flow

 

The result of the flow is a functional debug view that includes association across representations of the design and testbench, including those high-precision timing signals.

 

Figure 5: SimVision debug window setup

 

Overview of the Design Under Verification (DUV)

Verification has to prove that the analog design works correctly together with the digital part. The critical elements to verify include:

• Programmable delay lines move data edges with sub-ps resolution

• PLL generates clocks with wide range of programmable frequency

• High-speed data stream at output of analog is correct

These goals can be achieved only if parts of the analog design are represented with fine resolution timing.

 

Figure 6: Mixed-signal design partitioning for verification

 

How to Get to a Verilog Model of the Analog Design

There was an existing Verilog cell library with basic building blocks that included:

- Gates, flip-flops, muxes, latches

- Behavioral models of programmable delay elements, PLL, loop filter, phase detector

With a traditional simulation approach, a cell-based netlist of the analog schematic is created. This netlist is integrated with the Verilog description of the digital design and can be simulated with a normal workstation. To use Palladium simulation, the (non-synthesizable) portions of the analog design that require fine resolution timing have to be replaced by digital timing representation. This modeling task is completed by using a combination of the existing Verilog cell library and the newly developed timed cells.

Loop Breaking

One of the chief characteristics of the timed cells is that they contain only combinational cells that propagate logic from inputs to outputs. Any feedback from a cell’s transitive fanout back to an input creates a combinational loop that must be broken to reach a steady state. Although the Palladium XP loop breaking algorithm works correctly, the timed cells provided a unique challenge that led to unpredictable results.  Thus, a process was developed to ensure predictable loop breaking behavior. The user input to the process was to provide a property at the loop origin that the netlister recognized and translated to the appropriate loop breaking directives.

Augmented Netlisting

Ease of use and flow automation were two primary considerations in creating a solution that could be deployed more broadly. That made creating a one-step netlisting process a high-value item. The signal point annotation and automatic hierarchy expansion of the “digital timing” parameter helped achieve that goal. The netlister was enriched to identify the key schematic annotations at any point in the hierarchy, including bit and bus signals.

Consistency checking and annotation reporting created a log useful in debugging and evolving the solution.

Wrapper Cell Modeling and Verification

The netlister generates a list of schematic instances at the designated “netlister stop level” for each instance that requires a Verilog model with fine resolution timing. For the design in this paper there were 160 such instances.

The library of timed cells was created; these cells were actually “wrapper” cells composed of the primitives for timed cell modeling described above. A new verification flow was created that used the behavior of the primitive cells as a reference for the expected behavior of the composed cells. The testing of the composed cells had the timing width parameter set to 1 to enable direct comparison to the primitive cells. The Cadence Incisive Enterprise Simulator tool was successfully employed to perform assertion-based verification of the composed cells versus the existing primitive cells.

Mapping and Long Paths

Initial experiments showed that inclusion of the fine resolution timed cells into the digital emulation environment would about double the required capacity per run. As previously pointed out, the timed cells having only combinational forward paths creates a loop issue. This fact also had the result of creating some such paths that were more than 5,000 steps of logic. A timed cell optimization process helped to solve this problem. The basic idea was to break the path up by adding flip-flops in strategic locations to reduce combinational path length. The reason that this is important is that the maximum achievable emulation speed is related to combinational path length.

Results

Once the flow was in place, and some realistic test cases were run through it, some further performance tuning opportunities were discovered to additionally reduce runtimes (e.g., Palladium XP tbrun mode was used to gain speed). The reference used for overall speed gains on this solution was versus a purely software-based solution on the highest performance workstation available.

The findings of the performance comparison were startlingly good:

• On Palladium XP, the simulation speed is 50X faster than on Advantest’s fastest workstation

• Software simulation running 25 days can now be run in 12 hours -> realistic runtime enables long-running tests that were not feasible before

• They now have 500 tests that execute once in more than 48 hours

• They can be run much more frequently using randomization and this will increase test coverage dramatically

Steve Carlson




case study

Case study: mortgage fraud

LSG Litigation regularly deals with cases involving mortgage fraud. The key to combating mortgage fraud is early detection, comprehensive and vigorous investigation, and immediate recovery action. A case study: in October 2008, we were instructed to...




case study

A case study in receivership practice: Devon Commercial Property Limited v Robert Adrian Barnett, Robert John Blecher

Key Points

• The self-dealing rule does not extend to a sale by a receiver to a party in which the mortgagee has an interest.

• Although the duties of a receiver and a mortgagee are similar as to the Property, a receiver, unlike a mortgage...




case study

Advancing the K-12 Reform from the Ground: A Case Study in the Philippines

This paper describes the implementation of the Certificate in Educational Studies in Leadership (CESL) in the Philippines as a professional development initiative delivered in a customized blended learning mode.




case study

Seismic imaging of melanges; Pieniny Klippen Belt case study

The authors present results of the first high-resolution deep seismic reflection survey in the Pieniny Klippen Belt (PKB) in Poland. This survey sheds new light on the matter of olistostromes and the mélange character of the PKB. The sedimentary mass-transport deposits represented by olistoliths and olistostromes manifest themselves by different petrophysical parameters of rocks (velocity, density and resistivity) and seismic attributes. Seismic attributes are very effective in the interpretation of the geology of complex mélanges. The authors used selected attributes: low-pass filter, energy, energy gradient, dip-steered median filter, Prewitt filter, Laplacian edge enhancing filter and square root of the energy gradient. These attributes emphasize changes of the seismic image inside mélange zones. The distinguished olistoliths are now inside imbricated thrust structures and they are tectonically rearranged. Polygenetic mélanges in the PKB originated as a result of sedimentary and tectonic processes. The PKB in the investigated area forms several north-vergent thrust sheets belonging to the Złatne and Hulina nappes. Both nappes contain large chaotic, non-reflective olistoliths as well as the smaller mainly high-reflective olistoliths. Olistoliths are arranged parallel to the flysch layering and thrusts. The results presented confirm the postulated two olistostrome belts within the PKB structure.

Thematic collection: This article is part of the Polygenetic mélanges collection available at: https://www.lyellcollection.org/cc/polygenetic-melanges




case study

Layering and structural inheritance controls on fault zone structure in three dimensions: a case study from the northern Molasse Basin, Switzerland

Mechanical heterogeneity of a sedimentary sequence exerts a primary control on the geometry of fault zones and the proportion of offset accommodated by folding. The Wildensbuch Fault Zone in the Swiss Molasse Basin, with a maximum throw of 40 m, intersects a Mesozoic section containing a thick (120 m) clay-dominated unit (Opalinus Clay) over- and underlain by more competent limestone units. Interpretation of a 3D seismic reflection survey indicates that the fault zone formed by upward propagation of an east–west-trending basement structure, through the Mesozoic section, in response to NE–SW Miocene extension. This configuration formed an array of left-stepping normal fault segments above and below the Opalinus Clay. In cross-section a broad monoclinal fold is observed in the Opalinus Clay. Folding, however, is not ubiquitous: it occurs in the Opalinus Clay where the fault segments above and below are oblique to one another, whereas where they are parallel the fault passes through the Opalinus Clay with little folding. These observations demonstrate that, even in strongly heterogeneous sequences (here, a four-fold difference in both Young's modulus and cohesion between layers), the occurrence of folding may depend on the local relationship between fault geometry and the applied stress field rather than on rheological properties alone.




case study

A case study for identification of organic-silt bottom sediments in an artificial lake formed in gravel alluvium in the geotourism locality of Slnecne Jazera in Senec (Bratislava, Slovakia)

This case study aims to identify the cubic capacity and geometry of the geological body of silt–organic sediments in the environment of a former gravel pit situated in a drainless depression of the alluvium of the Čierna voda River. It is located in the well-known geotourism locality of Slnečné Jazera in Senec, in the SW of Slovakia. To identify the body, electrical resistivity tomography was combined with the use of sonar. The research shows that this approach is appropriate for a number of activities that are subjects of engineering-geological investigations. The cubic capacity and geometry of specific aqueous engineering-geological environments must be determined in connection with the need for the management, control, quantification and extraction of selected sedimentary bodies. In addition, the management of sustainable development of reservoirs, sedimentation basins, industrial ponds, settling pits and natural pools for recreation (the subject of the case study) is important to control the limit amounts of sediments in such environments. The results of this study may be applied in analogous engineering-geological conditions. The drainless depression of Slnečné Jazera contained a body of silt–organic sediments amounting to 23 000 m³ (41 Olympic-size pools of 25 m × 12.5 m × 1.8 m). The maximum thickness of the bottom sediments was about 6.3 m on the alluvium with an articulated morphology owing to the submerged digging of gravel. The proposed approach improved the control of extraction of the sedimentary body and optimized the engineering-geological conditions in this geotourism locality.
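The pool comparison in the abstract can be verified with a line of arithmetic:

```python
# Sanity check of the abstract's volume comparison (illustrative only).
pool_volume = 25 * 12.5 * 1.8      # one pool of 25 m x 12.5 m x 1.8 m, in m³
sediment_volume = 23_000           # identified silt-organic body, in m³
pools = sediment_volume / pool_volume
print(pool_volume, round(pools))   # 562.5 m³ per pool, about 41 pools
```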




case study

Stability analyses of large waste dumps via 3D numerical modelling considering cracks and earthquake loading: a case study of Zhujiabaobao waste dump

This paper uses a 3D model for stability assessment of the Zhujiabaobao waste dump with ground cracks. The study data were gathered via reconnaissance, geomorphological analysis and laboratory experiments. A 3D extended finite element method model that can represent cracks was then used to calculate the factor of safety (FOS) of the waste dump via the strength reduction technique. The simulation shows the dump to have an FOS of 1.22, and that both the position and penetration depth of cracks in the waste dump have a crucial impact on the stability of the slope. Because the study area is seismically active, the dynamic response of the waste dump under seismic waves of different magnitudes (peak accelerations of 0.05, 0.15, 0.25 and 0.45 g) was simulated and analysed via an explicit dynamic model. The simulation shows that high steps in the slope are particularly responsive to earthquakes. The approach used here for analysing stability under static and dynamic loads is useful for hazard prevention and mitigation.
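The strength reduction technique named in the abstract can be sketched in miniature. The idea is to divide the shear strength parameters (cohesion c and tan φ) by a trial factor F until the slope just fails; that critical F is the FOS. The sketch below applies it to a textbook dry infinite-slope model, not the paper's 3D XFEM model, and all parameter values are illustrative assumptions.

```python
import math

def infinite_slope_fos(c, phi_deg, gamma, z, beta_deg):
    """Analytic factor of safety for a dry infinite slope (textbook formula)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + gamma * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

def strength_reduction_fos(c, phi_deg, gamma, z, beta_deg, tol=1e-5):
    """FOS by strength reduction: bisect for the factor F at which the slope,
    with reduced strength c/F and arctan(tan(phi)/F), reaches limiting equilibrium."""
    beta = math.radians(beta_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)

    def stable(F):
        c_r = c / F                                            # reduced cohesion
        phi_r = math.atan(math.tan(math.radians(phi_deg)) / F)  # reduced friction angle
        return c_r + gamma * z * math.cos(beta) ** 2 * math.tan(phi_r) >= driving

    lo, hi = 0.01, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if stable(mid):
            lo = mid   # still stable at this reduction: push F higher
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For, say, c = 10 kPa, φ = 30°, γ = 18 kN/m³, z = 5 m and β = 35°, the two routines agree to within the bisection tolerance, which is the point of the technique: failure is induced numerically by dividing the strength parameters rather than computed from a closed form, so it carries over to geometries (like a cracked 3D dump) with no analytic solution.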




case study

Investigating the Effects of the Chemical Composition on Glass Corrosion: A Case Study for Type I Vials

Glass is the preferred material for parenteral packaging because of its physico-chemical properties. Type I borosilicate glass is used worldwide for this purpose, but it may suffer from breakage, corrosion and delamination, issues that can compromise drug quality, safety and efficacy. These issues can be mitigated, or avoided altogether, by selecting the most suitable raw material early in the design of the glass container. In this study, Type I borosilicate glass vials manufactured from two glass tubes with different chemical compositions were studied and compared in terms of their resistance to corrosion. A testing design was applied to select the best-practice approach, comparing different storage simulation conditions: an ageing treatment through autoclaving and stability testing (real-time and accelerated). Clear differences in hydrolytic and corrosion resistance were found between the two glass types, highlighting the relation between chemical composition and glass chemical durability. Non-negligible differences were also observed between the different storage conditions.




case study

Tumoral and immune heterogeneity in an anti-PD-1-responsive glioblastoma: a case study

Clinical benefit of immune checkpoint blockade in glioblastoma (GBM) is rare, and we hypothesize that tumor clonal evolution and the immune microenvironment are key determinants of response. Here, we present a detailed molecular characterization of the intratumoral and immune heterogeneity in an IDH wild-type, MGMT-negative GBM patient who plausibly benefited from anti-PD-1 therapy with an unusually long 25-mo overall survival time. We leveraged multiplex immunohistochemistry, RNA-seq, and whole-exome data from the primary tumor and three resected regions of recurrent disease to survey regional tumor-immune interactions, genomic instability, mutation burden, and expression profiles. We found significant regional heterogeneity in the neoantigenic and immune landscape, with a differential T-cell signature among recurrent sectors, a uniform loss of focal amplifications in EGFR, and a novel subclonal EGFR mutation. Comparisons with recently reported correlates of checkpoint blockade in GBM and with TCGA-GBM revealed appreciable intratumoral heterogeneity that may have contributed to a differential PD-1 blockade response.