
The Penta Helix Model of Innovation in Oman: An HEI Perspective

Aim/Purpose: Countries today strategically pursue regional development and economic diversification to compete in the world market. Higher Education Institutions (HEIs) are at the crux of this political strategy. The paper reviews how HEIs can propel regional socio-economic growth and development by way of research, innovation, and entrepreneurship.

Background: Offering an academic perspective on the role of HEIs within the Penta Helix innovation network for business and social innovation, the paper discusses opportunities and challenges in gestating an innovation culture. It likewise identifies and details workable strategies and programs.

Methodology: Best-practice innovation campaigns initiated by Omani HEIs in collaboration with capstone programs organized by the government were parsed from selected local and international literature. The study includes a causal analysis of innovation information contained in 40 of the 44 OAAA Quality Audit reports on HEIs published from 2009 to 2016. The best-practice programs serve as success indicators and are used as a field metric to effect a Penta Helix blueprint for innovation.

Contribution: The paper discusses how HEIs can engender, nurture, drive, and sustain innovation and entrepreneurial activity by using a strategic innovation blueprint such as the Penta Helix model. It gathers together recent historical attempts at promoting innovation by HEIs. It likewise suggests the creation of a network channel to allow key players in the innovation network to share innovation information and collaborate with each other. Furthermore, it contributes to the development of an innovation culture in HEIs.

Findings: Expectations run high in academia. For one, universities believe that all innovations begin embryonically within their halls. Universities, too, believe it is naturally incumbent on them to stimulate and advance innovation, even though most innovation programs in Oman are initiated by the government. HEI engagement is still perceived as weak, and HEIs have yet to emerge as a strong leading force in promoting systems of innovation. There is clear awareness of the need to adopt leading-edge practices in innovation strategy and management, curriculum and assessment, staff support and reward systems, funding and ICT infrastructure, research commercialization and IP management, and community engagement.

Recommendations for Practitioners: There is a need to conduct more in-depth analyses of the synergy and partnerships between key players of the Penta Helix model. A large-scale survey would help fully reveal the status and impact of innovation practices in the region and among HEIs.

Recommendation for Researchers: There is a need to conduct more in-depth analyses of the synergy and partnerships between key players of the Penta Helix model. A large-scale survey would help fully reveal the status and impact of innovation practices in the region and among HEIs.

Impact on Society: The paper hopes to influence policy and fully intends to convince policymakers to increase the adoption of strategic interventions. It is not merely a theoretical description of the problem; it suggests several concrete courses of action.

Future Research: The paper identifies the need to measure the effectiveness of current innovation practices among key players in the innovation network and how these practices advance Oman's knowledge economy. We propose a Likert-based bottom-up engagement metric.





Advancements in the DRG system payment: an optimal volume/procedure mix model for the optimisation of the reimbursement in Italian healthcare organisations

In Italy, the reimbursement provided to healthcare organisations for medical and surgical procedures is based on the diagnosis related group weight (DRGW), which is an increasing function of the complexity of the procedures. This makes the reimbursement a function with no upper bound. The model also ignores the relation between volume and complexity. The paper proposes a mathematical model for optimising the reimbursement by determining the optimal volume/procedure mix, taking into account both the volume/complexity and the DRGW/complexity relations. Decreasing, linear, and increasing returns to scale are defined, and the optimal solution is found for each. A comparison with the traditional approach shows that the proposed model helps the healthcare system determine the reimbursement to provide to healthcare organisations, whereas the traditional approach, by neglecting the relation between volume and complexity, can overestimate the reimbursement.
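As a toy illustration of the optimisation idea (the DRG weights, returns-to-scale exponents, and capacity constraint below are our own illustrative assumptions, not the paper's calibrated model):

```python
# Illustrative optimal volume/procedure mix under DRG-weight reimbursement.
# Weight * volume^gamma is a toy reimbursement form: gamma < 1 gives
# decreasing, = 1 linear, > 1 increasing returns to scale.
import numpy as np
from scipy.optimize import minimize

drgw = np.array([1.0, 2.5, 4.0])    # hypothetical DRG weights (rising complexity)
gamma = np.array([0.8, 1.0, 1.2])   # hypothetical returns-to-scale exponents
capacity = 100.0                    # total procedure volume the organisation can deliver

def neg_reimbursement(v):
    return -np.sum(drgw * v ** gamma)

cons = ({"type": "ineq", "fun": lambda v: capacity - v.sum()},)
res = minimize(neg_reimbursement, x0=np.full(3, capacity / 3),
               bounds=[(0.0, None)] * 3, constraints=cons)
print("optimal volume mix:", res.x.round(1))
print("total reimbursement:", -res.fun)
```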





X-ray standing wave characterization of the strong metal–support interaction in Co/TiOx model catalysts

The strong metal–support interaction (SMSI) is a phenomenon observed in supported metal catalyst systems in which reducible metal oxide supports can form overlayers over the surface of active metal nanoparticles (NPs) under a hydrogen (H2) environment at elevated temperatures. SMSI has been shown to affect catalyst performance in many reactions by changing the type and number of active sites on the catalyst surface. Laboratory methods for analyzing SMSI at the nanoparticle-ensemble level are lacking and mostly rely on indirect evidence, such as gas chemisorption. Here, we demonstrate the possibility of detecting and characterizing SMSI in Co/TiOx model catalysts using the laboratory X-ray standing wave (XSW) technique for a large ensemble of NPs at the bulk scale. We designed a thermally stable MoNx/SiNx periodic multilayer that retains XSW generation after reduction with H2 gas at 600°C. The model catalyst system was synthesized by depositing a thin TiOx layer on top of the periodic multilayer, followed by Co NP deposition via spark ablation. Partial encapsulation of the Co NPs by TiOx was identified by analyzing the change in the Ti atomic distribution. This methodological approach can be extended to observe surface restructuring of model catalysts in situ at high temperature (up to 1000°C) and pressure (≤3 mbar), and can also be relevant for fundamental studies of the thermal stability of membranes, as well as for metallurgy.
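For reference, XSW analysis of this kind typically proceeds by fitting the modulation of the element-specific fluorescence yield as the standing wave is scanned through the Bragg condition; a standard form of the yield (general XSW theory, not necessarily this paper's specific fit function) is

$$ Y(\theta) = 1 + R(\theta) + 2\sqrt{R(\theta)}\, f_c \cos\!\big(\nu(\theta) - 2\pi p_c\big), $$

where $R(\theta)$ is the multilayer reflectivity, $\nu(\theta)$ the phase of the standing wave, and $f_c$ and $p_c$ the coherent fraction and coherent position that encode the distribution of the emitting atoms (here, Ti) relative to the multilayer period.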





[ Y.3057 (12/21) ] - A trust index model for ICT infrastructures and services






Creating a Marketing Mix Model for a Better Marketing Budget: Analytics Corner

Using R programming, marketers can create a marketing mix model to determine how sustainable their audience channels are and make better ad-spend decisions. Here's how.
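The article works in R; as a language-neutral sketch of what such a model boils down to (a regression of sales on per-channel spend; the data, channel set, and ROI values below are invented for illustration):

```python
# Toy marketing mix model: regress weekly sales on per-channel ad spend to
# estimate each channel's contribution. All data here is synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 52                                      # weekly observations
spend = rng.uniform(0, 10, size=(n, 3))     # e.g. TV, search, social spend
true_roi = np.array([2.0, 3.5, 1.0])        # assumed "ground truth" ROIs
sales = 50 + spend @ true_roi + rng.normal(0, 5, n)

X = sm.add_constant(spend)
fit = sm.OLS(sales, X).fit()
print(fit.params)   # intercept (base sales) + estimated ROI per channel
```

The fitted coefficients are what a marketer would read as each channel's incremental return, and reallocating budget toward the highest-coefficient channels is the basic "better ad spend" decision.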





Predictions and Policymaking: Complex Modelling Beyond COVID-19

1 April 2020

Yasmin Afina

Research Assistant, International Security Programme

Calum Inverarity

Research Analyst and Coordinator, International Security Programme
The COVID-19 pandemic has highlighted the potential of complex systems modelling for policymaking but it is crucial to also understand its limitations.


A member of the media wearing a protective face mask works in Downing Street where Britain's Prime Minister Boris Johnson is self-isolating in central London, 27 March 2020. Photo by TOLGA AKMEN/AFP via Getty Images.

Complex systems models have played a significant role in informing and shaping the public health measures adopted by governments in the context of the COVID-19 pandemic. For instance, modelling carried out by a team at Imperial College London is widely reported to have driven the approach in the UK from a strategy of mitigation to one of suppression.

Complex systems modelling will increasingly feed into policymaking by predicting a range of potential correlations, results and outcomes based on a set of parameters, assumptions, data and pre-defined interactions. It is already instrumental in developing risk mitigation and resilience measures to address and prepare for existential crises such as pandemics, prospects of a nuclear war, as well as climate change.

The human factor

In the end, model-driven approaches must stand up to the test of real-life data. Modelling for policymaking must take into account a number of caveats and limitations. Models are developed to help answer specific questions, and their predictions will depend on the hypotheses and definitions set by the modellers, which are subject to their individual and collective biases and assumptions. For instance, the models developed by Imperial College came with the caveated assumption that a policy of social distancing for people over 70 will have a 75 per cent compliance rate. This assumption is based on the modellers’ own perceptions of demographics and society, and may not reflect all societal factors that could impact this compliance rate in real life, such as gender, age, ethnicity, genetic diversity, economic stability, as well as access to food, supplies and healthcare. This is why modelling benefits from a cognitively diverse team who bring a wide range of knowledge and understanding to the early creation of a model.
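As a toy illustration of how a single compliance assumption propagates into headline predictions (a deliberately crude SIR-type recursion, not the Imperial College model; all parameters are made up):

```python
# Crude discrete SIR recursion: vary only the assumed compliance rate with
# distancing and watch the predicted epidemic peak move. Illustrative only.
def peak_infected(compliance, beta=0.3, gamma=0.1, days=365):
    beta_eff = beta * (1 - 0.5 * compliance)  # compliance damps the contact rate
    s, i = 0.999, 0.001                       # susceptible, infected fractions
    peak = i
    for _ in range(days):
        new_inf = beta_eff * s * i
        s, i = s - new_inf, i + new_inf - gamma * i
        peak = max(peak, i)
    return peak

for c in (0.55, 0.75, 0.95):  # the assumed 75% rate bracketed by alternatives
    print(f"compliance {c:.0%}: predicted peak {peak_infected(c):.1%} infected")
```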

The potential of artificial intelligence

Machine learning, or artificial intelligence (AI), has the potential to advance the capacity and accuracy of modelling techniques by identifying new patterns and interactions, and overcoming some of the limitations resulting from human assumptions and bias. Yet, increasing reliance on these techniques raises the issue of explainability. Policymakers need to be fully aware and understand the model, assumptions and input data behind any predictions and must be able to communicate this aspect of modelling in order to uphold democratic accountability and transparency in public decision-making.

In addition, models using machine learning techniques require extensive amounts of data, which must also be of high quality and as free from bias as possible to ensure accuracy and address the issues at stake. Although technology may be used in the process (i.e. automated extraction and processing of information with big data), data is ultimately created, collected, aggregated and analysed by and for human users. Datasets will reflect the individual and collective biases and assumptions of those creating, collecting, processing and analysing this data. Algorithmic bias is inevitable, and it is essential that policy- and decision-makers are fully aware of how reliable the systems are, as well as their potential social implications.

The age of distrust

Increasing use of emerging technologies for data- and evidence-based policymaking is taking place, paradoxically, in an era of growing mistrust towards expertise and experts, as infamously asserted by Michael Gove. Policymakers and subject-matter experts have faced increased public scrutiny of their findings and of the resultant policies that those findings have been used to justify.

This distrust and scepticism within public discourse has only been fuelled by an ever-increasing availability of diffuse sources of information, not all of which are verifiable and robust. This has caused tension between experts, policymakers and the public, leading to conflict and uncertainty over which data and predictions can be trusted, and to what degree. The dynamic is exacerbated by the fact that certain individuals may purposefully misappropriate, or simply misinterpret, data to support their arguments or policies. Politicians are presently considered the least trusted professionals by the UK public, which highlights the importance of better and more effective communication between the scientific community, policymakers and the populations affected by policy decisions.

Acknowledging limitations

While measures can and should be built in to improve the transparency and robustness of scientific models in order to counteract these common criticisms, it is important to acknowledge that there are limits to the steps that can be taken. This is particularly the case when dealing with predictions of future events, which inherently involve degrees of uncertainty that cannot be fully accounted for by human or machine. As a result, if not carefully considered and communicated, the increased use of complex modelling in policymaking can undermine and obfuscate the policymaking process, contributing to significant mistakes, increased uncertainty, a lack of trust in the models and in the political process, and further disaffection of citizens.

The potential contribution of complexity modelling to the work of policymakers is undeniable. However, it is imperative to appreciate the inner workings and limitations of these models, such as the biases that underpin their functioning and the uncertainties that they will never be fully capable of accounting for, in spite of their immense power. They must be tested against the data, again and again, as new information becomes available; otherwise there is a risk of scientific models becoming embroiled in partisan politicization and potentially being weaponized for political purposes. It is therefore important not to treat these models as oracles, but as one of many contributions to the process of policymaking.





Efficient estimation in single index models through smoothing splines

Arun K. Kuchibhotla, Rohit K. Patra.

Source: Bernoulli, Volume 26, Number 2, 1587–1618.

Abstract:
We consider estimation and inference in a single index regression model with an unknown but smooth link function. In contrast to the standard approach of using kernels or regression splines, we use smoothing splines to estimate the smooth link function. We develop a method to compute the penalized least squares estimators (PLSEs) of the parametric and nonparametric components given independent and identically distributed (i.i.d.) data. We prove consistency and find the rates of convergence of the estimators. We establish asymptotic normality under mild assumptions and prove asymptotic efficiency of the parametric component under homoscedastic errors. A finite sample simulation corroborates our asymptotic theory. We also analyze a car mileage data set and an ozone concentration data set. The identifiability and existence of the PLSEs are also investigated.
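In shorthand we introduce here (the abstract itself gives no formulas), the single index model and the smoothing-spline criterion the PLSEs minimize can be written as

$$ Y = m(\theta^\top X) + \epsilon, \qquad (\hat m_n, \hat\theta_n) \in \operatorname*{arg\,min}_{m,\,\theta}\; \frac{1}{n}\sum_{i=1}^{n}\big(Y_i - m(\theta^\top X_i)\big)^2 + \lambda_n \int \big(m''(t)\big)^2\, dt, $$

where $m$ is the unknown smooth link, $\theta$ the index parameter, and $\lambda_n$ the smoothing parameter that penalizes curvature of the link.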









Laverne Cox models purple hair that complements her strapless yellow gown at an event in Florida

Laverne Cox made a bold entrance in a bright strapless yellow gown with purple hair at the Rosen Shingle Creek Hotel in Florida on Sunday.





CSS3 Flexible Box Model…Layout Coolness…also Oddities & Confusion

In August, following a Twitter discussion with Molly (and, of course, while partying on a Saturday night), Dave Gregory and I looked at whether the Flexible Box Layout Module (still a working draft) is getting close to ready for prime time yet. Our hope was that it would solve some of the frustrations [...]





Matrix models of string theory / Badis Ydri

Online Resource