ala

Egypt: a building designated as the palace of Alexander the Great. Coloured engraving, 17--.

A Paris (rue St Jacques a l'Hotel Saumur) : chez Mondhare, [between 1760 and 1792]




ala

A Burmese family seated in front of a palace, with women and children. Colour process print after Sayo Myo.

[1905?]




ala

Tibet: worshippers in a temple kneeling before statues of the Dalai Lama and the deity Manippe. Engraving by N. Parr, 17--, after J. Grueber.

[London?], [between 1700 and 1799?]




ala

Tibet: the King of Tangut and the Dalai Lama. Engraving by N. Parr, 17--, after J. Grueber.

[London?], [between 1700 and 1799?]




ala

Penguins may face ‘tough decisions’ with goalies thanks to salary cap crunch

Could they lose one or more of Murray, Jarry, and DeSmith?




ala

Health hazards of nitrite inhalants / editors, Harry W. Haverkos, John A. Dougherty.

Rockville, Maryland : National Institute on Drug Abuse, 1988.




ala

Inhalant use and treatment / by Terry Mason.

Rockville, Maryland : National Institute on Drug Abuse, 1979.




ala

Former Alabama prep star Davenport transfers to Georgia

Maori Davenport, who drew national attention over an eligibility dispute during her senior year of high school, is transferring to Georgia after playing sparingly in her lone season at Rutgers. Lady Bulldogs coach Joni Taylor announced Davenport's decision Wednesday. The 6-foot-4 center from Troy, Alabama, will have to sit out a season under NCAA transfer rules before she is eligible to join Georgia in 2021-22.




ala

On Mahalanobis Distance in Functional Settings

Mahalanobis distance is a classical tool in multivariate analysis. We suggest here an extension of this concept to the case of functional data. More precisely, the proposed definition concerns those statistical problems where the sample data are real functions defined on a compact interval of the real line. The obvious difficulty for such a functional extension is the non-invertibility of the covariance operator in infinite-dimensional cases. Unlike other recent proposals, our definition is suggested and motivated in terms of the Reproducing Kernel Hilbert Space (RKHS) associated with the stochastic process that generates the data. The proposed distance is a true metric; it depends on a single real smoothing parameter, which is fully motivated in RKHS terms. Moreover, it shares some properties of its finite-dimensional counterpart: it is invariant under isometries, it can be consistently estimated from the data, and its sampling distribution is known under Gaussian models. An empirical study of two statistical applications, outlier detection and binary classification, is included. The results are quite competitive when compared to other recent proposals in the literature.
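For reference, the classical (finite-dimensional) Mahalanobis distance that the abstract extends is the standard textbook quantity, not a construction specific to this paper: for a point $x \in \mathbb{R}^p$ and a distribution with mean $\mu$ and invertible covariance matrix $\Sigma$,

\[ d_M(x; \mu, \Sigma) = \sqrt{(x - \mu)^{\top} \Sigma^{-1} (x - \mu)}. \]

In the functional setting, $\Sigma$ is replaced by a covariance operator that is not invertible, which is precisely the difficulty the RKHS construction and its single smoothing parameter are meant to resolve.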




ala

On the consistency of graph-based Bayesian semi-supervised learning and the scalability of sampling algorithms

This paper considers a Bayesian approach to graph-based semi-supervised learning. We show that if the graph parameters are suitably scaled, the graph-posteriors converge to a continuum limit as the size of the unlabeled data set grows. This consistency result has profound algorithmic implications: we prove that when consistency holds, carefully designed Markov chain Monte Carlo algorithms have a uniform spectral gap, independent of the number of unlabeled inputs. Numerical experiments illustrate and complement the theory.
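As background for the kind of construction the abstract refers to (a generic sketch of graph-based semi-supervised learning, not necessarily the exact prior analysed in the paper): given inputs $x_1, \dots, x_n$, one typically forms a weight matrix $W_{ij} = k(\|x_i - x_j\| / \varepsilon)$ and the graph Laplacian

\[ L = D - W, \qquad D_{ii} = \sum_j W_{ij}, \]

and places a Gaussian prior on the latent label function $u$ with covariance built from $L$, for example $u \sim N\big(0, (L + \tau^2 I)^{-\alpha}\big)$. In constructions of this type, the consistency result described above concerns how such graph parameters must be scaled as the number of unlabeled inputs $n$ grows.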




ala

(1 + epsilon)-class Classification: an Anomaly Detection Method for Highly Imbalanced or Incomplete Data Sets

Anomaly detection is not an easy problem, since the distribution of anomalous samples is unknown a priori. We explore a novel method that offers a trade-off between one-class and two-class approaches, and leads to better performance on anomaly detection problems with small or non-representative anomalous samples. The method is evaluated using several data sets and compared to a set of conventional one-class and two-class approaches.




ala

Scalable Approximate MCMC Algorithms for the Horseshoe Prior

The horseshoe prior is frequently employed in Bayesian analysis of high-dimensional models, and has been shown to achieve minimax optimal risk properties when the truth is sparse. While optimization-based algorithms for the extremely popular Lasso and elastic net procedures can scale to dimension in the hundreds of thousands, algorithms for the horseshoe that use Markov chain Monte Carlo (MCMC) for computation are limited to problems an order of magnitude smaller. This is due to high computational cost per step and growth of the variance of time-averaging estimators as a function of dimension. We propose two new MCMC algorithms for computation in these models that have significantly improved performance compared to existing alternatives. One of the algorithms also approximates an expensive matrix product to give orders of magnitude speedup in high-dimensional applications. We prove guarantees for the accuracy of the approximate algorithm, and show that gradually decreasing the approximation error as the chain extends results in an exact algorithm. The scalability of the algorithm is illustrated in simulations with problem size as large as $N=5,000$ observations and $p=50,000$ predictors, and an application to a genome-wide association study with $N=2,267$ and $p=98,385$. The empirical results also show that the new algorithm yields estimates with lower mean squared error, intervals with better coverage, and elucidates features of the posterior that were often missed by previous algorithms in high dimensions, including bimodality of posterior marginals indicating uncertainty about which covariates belong in the model.
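For concreteness, the horseshoe prior mentioned above is, in its standard form (Carvalho, Polson and Scott; stated here as background, with variants differing in how the error variance enters), the global-local scale mixture

\[ \beta_j \mid \lambda_j, \tau \sim N(0, \lambda_j^2 \tau^2), \qquad \lambda_j \sim C^{+}(0, 1), \qquad \tau \sim C^{+}(0, 1), \qquad j = 1, \dots, p, \]

where $C^{+}(0,1)$ is the standard half-Cauchy distribution. The per-step cost referred to in the abstract is driven largely by the conditional update of $\beta$ given the scales, which requires sampling from a $p$-dimensional Gaussian.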




ala

Keeping the balance—Bridge sampling for marginal likelihood estimation in finite mixture, mixture of experts and Markov mixture models

Sylvia Frühwirth-Schnatter.

Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 706--733.

Abstract:
Finite mixture models and their extensions to Markov mixture and mixture of experts models are very popular in analysing data of various kinds. A challenge for these models is choosing the number of components based on marginal likelihoods. The present paper suggests two innovative, generic bridge sampling estimators of the marginal likelihood that are based on constructing balanced importance densities from the conditional densities arising during Gibbs sampling. The full permutation bridge sampling estimator is derived from considering all possible permutations of the mixture labels for a subset of these densities. For the double random permutation bridge sampling estimator, two levels of random permutations are applied: first to permute the labels of the MCMC draws and second to randomly permute the labels of the conditional densities arising during Gibbs sampling. Various applications show very good performance of these estimators in comparison to importance and reciprocal importance sampling estimators derived from the same importance densities.
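As background, the generic bridge sampling identity that estimators of this kind build on (Meng and Wong's classical result, stated here for orientation rather than taken from the paper) is: for an importance density $q(\theta)$ and any bridge function $\alpha(\theta)$,

\[ p(y) = \frac{E_{q(\theta)}\big[\alpha(\theta)\, p(y \mid \theta)\, p(\theta)\big]}{E_{p(\theta \mid y)}\big[\alpha(\theta)\, q(\theta)\big]}, \]

with the two expectations replaced in practice by Monte Carlo averages over draws from $q$ and from the posterior. The contribution described above lies in how the importance density $q$ is constructed from permuted Gibbs conditional densities so that it is balanced across the mixture labels.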




ala

Reclaiming indigenous governance : reflections and insights from Australia, Canada, New Zealand, and the United States

9780816539970 (paperback)




ala

Scalar-on-function regression for predicting distal outcomes from intensively gathered longitudinal data: Interpretability for applied scientists

John J. Dziak, Donna L. Coffman, Matthew Reimherr, Justin Petrovich, Runze Li, Saul Shiffman, Mariya P. Shiyko.

Source: Statistics Surveys, Volume 13, 150--180.

Abstract:
Researchers are sometimes interested in predicting a distal or external outcome (such as smoking cessation at follow-up) from the trajectory of an intensively recorded longitudinal variable (such as urge to smoke). This can be done in a semiparametric way via scalar-on-function regression. However, the resulting fitted coefficient regression function requires special care for correct interpretation, as it represents the joint relationship of time points to the outcome, rather than a marginal or cross-sectional relationship. We provide practical guidelines, based on experience with scientific applications, for helping practitioners interpret their results and illustrate these ideas using data from a smoking cessation study.
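A typical scalar-on-function regression model of the kind the abstract describes (a standard formulation given for orientation, not quoted from the paper) is

\[ Y_i = \beta_0 + \int_a^b \beta(t)\, X_i(t)\, dt + \varepsilon_i, \]

with a logistic link replacing the identity when the distal outcome (e.g., smoking cessation) is binary. The interpretability caveat above concerns the coefficient function $\beta(t)$: it weights the entire trajectory $X_i(\cdot)$ jointly, so its value at a particular $t$ cannot be read as a cross-sectional effect of $X_i(t)$ alone.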




ala

Domain Adaptation in Highly Imbalanced and Overlapping Datasets. (arXiv:2005.03585v1 [cs.LG])

In many Machine Learning domains, datasets are characterized by highly imbalanced and overlapping classes. Particularly in the medical domain, a specific list of symptoms can be labeled as one of several different conditions. Some of these conditions may be more prevalent than others by several orders of magnitude. Here we present a novel unsupervised Domain Adaptation scheme for such datasets. The scheme, based on a specific type of Quantification, is designed to work under both label and conditional shifts. It is demonstrated on datasets generated from Electronic Health Records and provides high-quality results for both Quantification and Domain Adaptation in very challenging scenarios. Potential benefits of using this scheme in the current COVID-19 outbreak, for estimation of prevalence and probability of infection, are discussed.




ala

Predictive Modeling of ICU Healthcare-Associated Infections from Imbalanced Data. Using Ensembles and a Clustering-Based Undersampling Approach. (arXiv:2005.03582v1 [cs.LG])

Early detection of patients vulnerable to infections acquired in the hospital environment is a challenge in current health systems, given the impact that such infections have on patient mortality and healthcare costs. This work is focused on both the identification of risk factors and the prediction of healthcare-associated infections in intensive-care units by means of machine-learning methods. The aim is to support decision making directed at reducing the incidence rate of infections. In this field, it is necessary to deal with the problem of building reliable classifiers from imbalanced datasets. We propose a clustering-based undersampling strategy to be used in combination with ensemble classifiers. A comparative study with data from 4616 patients was conducted in order to validate our proposal. We applied several single and ensemble classifiers both to the original dataset and to data preprocessed by means of different resampling methods. The results were analyzed by means of classic and recent metrics specifically designed for imbalanced data classification. They revealed that the proposal is more efficient in comparison with other approaches.
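A minimal sketch of how a clustering-based undersampling step can be combined with an ensemble classifier, in the spirit of the proposal above. It assumes scikit-learn and binary labels with the minority class coded as 1; the choice of KMeans, the nearest-to-centroid selection rule, and the random forest are illustrative assumptions, not the paper's exact method.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def cluster_undersample(X, y, random_state=0):
    """Keep all minority (label 1) samples and reduce the majority class (label 0)
    to the points nearest to k cluster centroids, with k the minority size."""
    X_min, X_maj = X[y == 1], X[y == 0]
    k = len(X_min)
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X_maj)
    keep = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X_maj[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(dists)])  # majority point closest to centroid c
    X_bal = np.vstack([X_min, X_maj[keep]])
    y_bal = np.concatenate([np.ones(k, dtype=int), np.zeros(k, dtype=int)])
    return X_bal, y_bal

# Usage sketch: undersample the training data, then fit the ensemble.
# X_bal, y_bal = cluster_undersample(X_train, y_train)
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)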




ala

On unbalanced data and common shock models in stochastic loss reserving. (arXiv:2005.03500v1 [q-fin.RM])

Introducing common shocks is a popular dependence modelling approach, with some recent applications in loss reserving. The main advantage of this approach is the ability to capture structural dependence coming from known relationships. In addition, it helps with the parsimonious construction of correlation matrices of large dimensions. However, complications arise in the presence of "unbalanced data", that is, when the (expected) magnitude of observations over a single triangle, or between triangles, can vary substantially. Specifically, if a single common shock is applied to all of these cells, it can contribute insignificantly to the larger values and/or swamp the smaller ones, unless careful adjustments are made. This problem is further complicated in applications involving negative claim amounts. In this paper, we address this problem in the loss reserving context using a common shock Tweedie approach for unbalanced data. We show that the solution not only provides a much better balance of the common shock proportions relative to the unbalanced data, but it is also parsimonious. Finally, the common shock Tweedie model also provides distributional tractability.




ala

On a computationally-scalable sparse formulation of the multidimensional and non-stationary maximum entropy principle. (arXiv:2005.03253v1 [stat.CO])

Data-driven modelling and computational predictions based on the maximum entropy principle (MaxEnt principle) aim at finding as-simple-as-possible - but not simpler than necessary - models that allow one to avoid the data overfitting problem. We derive a multivariate non-parametric and non-stationary formulation of the MaxEnt principle and show that its solution can be approximated through a numerical maximisation of a sparse constrained optimization problem with regularization. Application of the resulting algorithm to popular financial benchmarks reveals memoryless models allowing for simple and qualitative descriptions of the major stock market index data. We compare the obtained MaxEnt models to the heteroscedastic models from computational econometrics (GARCH, GARCH-GJR, MS-GARCH, GARCH-PML4) in terms of model fit, complexity and prediction quality. We compare the resulting model log-likelihoods, the values of the Bayesian Information Criterion, posterior model probabilities, the quality of the data autocorrelation function fits, as well as the Value-at-Risk prediction quality. We show that all seven of the considered major financial benchmark time series (DJI, SPX, FTSE, STOXX, SMI, HSI and N225) are better described by conditionally memoryless MaxEnt models with non-stationary regime switching than by the common econometric models with finite memory. This analysis also reveals a sparse network of statistically significant temporal relations for the positive and negative latent variance changes among different markets. The code is provided for open access.




ala

Multi-Label Sampling based on Local Label Imbalance. (arXiv:2005.03240v1 [cs.LG])

Class imbalance is an inherent characteristic of multi-label data that hinders most multi-label learning methods. One efficient and flexible strategy to deal with this problem is to employ sampling techniques before training a multi-label learning model. Although existing multi-label sampling approaches alleviate the global imbalance of multi-label datasets, it is actually the imbalance level within the local neighbourhood of minority class examples that plays a key role in performance degradation. To address this issue, we propose a novel measure to assess the local label imbalance of multi-label datasets, as well as two multi-label sampling approaches based on the local label imbalance, namely MLSOL and MLUL. By considering all informative labels, MLSOL creates more diverse and better labeled synthetic instances for difficult examples, while MLUL eliminates instances that are harmful to their local region. Experimental results on 13 multi-label datasets demonstrate the effectiveness of the proposed measure and sampling approaches for a variety of evaluation metrics, particularly in the case of an ensemble of classifiers trained on repeated samples of the original data.




ala

Biodiversity of the Himalaya : Jammu and Kashmir State

9789813291744 (electronic bk.)




ala

Sparse high-dimensional regression: Exact scalable algorithms and phase transitions

Dimitris Bertsimas, Bart Van Parys.

Source: The Annals of Statistics, Volume 48, Number 1, 300--323.

Abstract:
We present a novel binary convex reformulation of the sparse regression problem that constitutes a new duality perspective. We devise a new cutting plane method and provide evidence that it can solve to provable optimality the sparse regression problem for sample sizes $n$ and number of regressors $p$ in the 100,000s, that is, two orders of magnitude better than the current state of the art, in seconds. The ability to solve the problem for very high dimensions allows us to observe new phase transition phenomena. Contrary to traditional complexity theory, which suggests that the difficulty of a problem increases with problem size, the sparse regression problem has the property that as the number of samples $n$ increases the problem becomes easier: the solution recovers 100% of the true signal, and our approach solves the problem extremely fast (in fact faster than Lasso). For a small number of samples $n$, our approach takes longer to solve the problem, but importantly the optimal solution provides a statistically more relevant regressor. We argue that our exact sparse regression approach presents a superior alternative over heuristic methods available at present.
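For orientation, the sparse regression problem referred to is, in its generic best-subset form (the paper's exact formulation may add a ridge-type regularization term; this statement is background rather than a quotation),

\[ \min_{\beta \in \mathbb{R}^p} \; \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le k, \]

where $\|\beta\|_0$ counts the nonzero coefficients. A binary reformulation of such a problem introduces indicator variables $s_j \in \{0,1\}$ marking which coefficients may be nonzero, to which cutting plane methods can then be applied.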




ala

Scalable high-resolution forecasting of sparse spatiotemporal events with kernel methods: A winning solution to the NIJ “Real-Time Crime Forecasting Challenge”

Seth Flaxman, Michael Chirico, Pau Pereira, Charles Loeffler.

Source: The Annals of Applied Statistics, Volume 13, Number 4, 2564--2585.

Abstract:
We propose a generic spatiotemporal event forecasting method which we developed for the National Institute of Justice’s (NIJ) Real-Time Crime Forecasting Challenge (National Institute of Justice (2017)). Our method is a spatiotemporal forecasting model combining scalable randomized Reproducing Kernel Hilbert Space (RKHS) methods for approximating Gaussian processes with autoregressive smoothing kernels in a regularized supervised learning framework. While the smoothing kernels capture the two main approaches in current use in the field of crime forecasting, kernel density estimation (KDE) and self-exciting point process (SEPP) models, the RKHS component of the model can be understood as an approximation to the popular log-Gaussian Cox process model. For inference, we discretize the spatiotemporal point pattern and learn a log-intensity function using the Poisson likelihood and highly efficient gradient-based optimization methods. Model hyperparameters, including the quality of the RKHS approximation, spatial and temporal kernel lengthscales, the number of autoregressive lags, the bandwidths for the smoothing kernels, and the cell shape, size and rotation, were learned using cross-validation. The resulting predictions significantly exceeded baseline KDE estimates and SEPP models for sparse events.
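A minimal sketch of the kind of randomized RKHS approximation the abstract mentions, using random Fourier features for an RBF kernel; the kernel choice, lengthscale, and feature count here are illustrative assumptions, and the actual challenge entry combines such features with autoregressive smoothing kernels and a Poisson likelihood as described above.

import numpy as np

def random_fourier_features(X, n_features=256, lengthscale=1.0, seed=0):
    """Map inputs X of shape (n, d) to features Z such that Z @ Z.T approximates
    the RBF kernel exp(-||x - x'||^2 / (2 * lengthscale**2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# A log-intensity that is linear in these features can then be fitted by
# maximizing the Poisson likelihood over the discretized space-time cells
# with gradient-based optimization, as the abstract describes.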




ala

The classification permutation test: A flexible approach to testing for covariate imbalance in observational studies

Johann Gagnon-Bartsch, Yotam Shem-Tov.

Source: The Annals of Applied Statistics, Volume 13, Number 3, 1464--1483.

Abstract:
The gold standard for identifying causal relationships is a randomized controlled experiment. In many applications in the social sciences and medicine, the researcher does not control the assignment mechanism and instead may rely upon natural experiments or matching methods as a substitute for experimental randomization. The standard testable implication of random assignment is covariate balance between the treated and control units. Covariate balance is commonly used to validate the claim of "as good as random" assignment. We propose a new nonparametric test of covariate balance. Our Classification Permutation Test (CPT) is based on a combination of classification methods (e.g., random forests) with Fisherian permutation inference. We revisit four real data examples and present Monte Carlo power simulations to demonstrate the applicability of the CPT relative to other nonparametric tests of equality of multivariate distributions.
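A minimal sketch of a classification permutation test in the spirit of the abstract: train a classifier to distinguish treated from control units on the covariates and compare its cross-validated accuracy to a permutation null. It assumes scikit-learn; the random forest, 5-fold cross-validation, and accuracy statistic are illustrative choices rather than the authors' exact implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def classification_permutation_test(X, z, n_perm=200, seed=0):
    """X: covariate matrix (n, p); z: binary treatment indicator of length n.
    Returns the observed cross-validated accuracy and a permutation p-value."""
    rng = np.random.default_rng(seed)

    def cv_accuracy(labels):
        clf = RandomForestClassifier(n_estimators=200, random_state=seed)
        return cross_val_score(clf, X, labels, cv=5, scoring="accuracy").mean()

    observed = cv_accuracy(z)
    null = np.array([cv_accuracy(rng.permutation(z)) for _ in range(n_perm)])
    # Under covariate balance, the classifier should do no better on the real
    # treatment labels than on permuted ones.
    p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
    return observed, p_value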




ala

Around the entropic Talagrand inequality

Giovanni Conforti, Luigia Ripani.

Source: Bernoulli, Volume 26, Number 2, 1431--1452.

Abstract:
In this article, we study a generalization of the classical Talagrand transport-entropy inequality in which the Wasserstein distance is replaced by the entropic transportation cost. This class of inequalities was introduced in recent work (Probab. Theory Related Fields 174 (2019) 1–47), in connection with the study of Schrödinger bridges. We provide several equivalent characterizations in terms of reverse hypercontractivity for the heat semigroup, contractivity of the Hamilton–Jacobi–Bellman semigroup and dimension-free concentration of measure. Properties such as tensorization and relations to other functional inequalities are also investigated. In particular, we show that the inequalities studied in this article are implied by a logarithmic Sobolev inequality and imply the Talagrand inequality.
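For orientation, the classical Talagrand transport-entropy inequality being generalized states (in one standard convention; constants vary across references) that a reference measure $\mu$ satisfies $T_2(C)$ if

\[ W_2^2(\nu, \mu) \le 2C\, H(\nu \mid \mu) \qquad \text{for all probability measures } \nu, \]

where $W_2$ is the quadratic Wasserstein distance and $H(\nu \mid \mu)$ is the relative entropy. The inequalities studied above replace $W_2^2$ by the entropic transportation cost.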




ala

Item 01: Notebooks (2) containing handwritten copies of 123 letters from Major William Alan Audsley to his parents, ca. 1916-ca. 1919, transcribed by his father. Also includes original letters (2) written by Major Audsley.




ala

New Zealand says it backs Taiwan's role in WHO due to success with coronavirus




ala

Scalable Bayesian Inference for the Inverse Temperature of a Hidden Potts Model

Matthew Moores, Geoff Nicholls, Anthony Pettitt, Kerrie Mengersen.

Source: Bayesian Analysis, Volume 15, Number 1, 1--27.

Abstract:
The inverse temperature parameter of the Potts model governs the strength of spatial cohesion and therefore has a major influence over the resulting model fit. A difficulty arises from the dependence of an intractable normalising constant on the value of this parameter, and thus there is no closed-form solution for sampling from the posterior distribution directly. There is a variety of computational approaches for sampling from the posterior without evaluating the normalising constant, including the exchange algorithm and approximate Bayesian computation (ABC). A serious drawback of these algorithms is that they do not scale well for models with a large state space, such as images with a million or more pixels. We introduce a parametric surrogate model, which approximates the score function using an integral curve. Our surrogate model incorporates known properties of the likelihood, such as heteroskedasticity and critical temperature. We demonstrate this method using synthetic data as well as remotely sensed imagery from the Landsat-8 satellite. We achieve up to a hundredfold improvement in the elapsed runtime, compared to the exchange algorithm or ABC. An open-source implementation of our algorithm is available in the R package bayesImageS.
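For reference, in the standard formulation the Potts model places over a labelling $z$ of the pixels the distribution

\[ p(z \mid \beta) = \frac{\exp\Big(\beta \sum_{i \sim j} \delta(z_i, z_j)\Big)}{\mathcal{C}(\beta)}, \qquad \mathcal{C}(\beta) = \sum_{z} \exp\Big(\beta \sum_{i \sim j} \delta(z_i, z_j)\Big), \]

where $i \sim j$ ranges over neighbouring pixels, $\delta$ is the Kronecker delta and $\beta$ is the inverse temperature. The sum defining $\mathcal{C}(\beta)$ runs over all possible labellings, which is the intractable normalising constant the surrogate-model approach above is designed to avoid evaluating.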




ala

Dila jalana = Heart burn. / design : Biman Mullick.

London : Cleanair (33 Stillness Rd, London, SE23 1NG), [198-?]




ala

Shoppers Swear These $30 Colorfulkoala Leggings Are the Ultimate Lululemon Dupes

And they’re available in 19 fun prints.




ala

The Cognitive Thalamus as a Gateway to Mental Representations

Mathieu Wolff
Jan 2, 2019; 39:3-14
Viewpoints




ala

Circuit Stability to Perturbations Reveals Hidden Variability in the Balance of Intrinsic and Synaptic Conductances

Sebastian Onasch
Apr 15, 2020; 40:3186-3202
Systems/Circuits




ala

The Basel Committee finalises its principles on stress testing, analyses ways to end regulatory arbitrage practices, approves the annual G-SIB list and discusses the leverage ratio, crypto-assets

Spanish translation of press release - the Basel Committee on Banking Supervision finalises its stress-testing principles, reviews ways to stop regulatory arbitrage behaviour, agrees on the annual G-SIB list, and discusses the leverage ratio, crypto-assets, the market risk framework and implementation, 20 September 2018.




ala

kimi kim jalan jalan episode 2




ala

kimi kim jalan jalan




ala

if i were you ill go to the palace




ala

A Scientist Salarian - :milkie:




ala

Carl Paladino vs. the blacks




ala

Carl Paladino vs. his aunt




ala

Alaska Native Sisterhood civil rights leader Amy Hallingstad--a glimpse to 1947




ala

SHI to sponsor lecture on totem parks of Southeast Alaska




ala

Contribution of NPY Y5 Receptors to the Reversible Structural Remodeling of Basolateral Amygdala Dendrites in Male Rats Associated with NPY-Mediated Stress Resilience

Endogenous neuropeptide Y (NPY) and corticotrophin-releasing factor (CRF) modulate the responses of the basolateral amygdala (BLA) to stress and are associated with the development of stress resilience and vulnerability, respectively. We characterized persistent effects of repeated NPY and CRF treatment on the structure and function of BLA principal neurons in a novel organotypic slice culture (OTC) model of male rat BLA, and examined the contributions of specific NPY receptor subtypes to these neural and behavioral effects. In BLA principal neurons within the OTCs, repeated NPY treatment caused persistent attenuation of excitatory input and induced dendritic hypotrophy via Y5 receptor activation; conversely, CRF increased excitatory input and induced hypertrophy of BLA principal neurons. Repeated treatment of OTCs with NPY followed by an identical treatment with CRF, or vice versa, inhibited or reversed all structural changes in OTCs. These structural responses to NPY or CRF required calcineurin or CaMKII, respectively. Finally, repeated intra-BLA injections of NPY or a Y5 receptor agonist increased social interaction, a validated behavior for anxiety, and recapitulated structural changes in BLA neurons seen in OTCs, while a Y5 receptor antagonist prevented NPY's effects both on behavior and on structure. These results implicate the Y5 receptor in the long-term, anxiolytic-like effects of NPY in the BLA, consistent with an intrinsic role in stress buffering, and highlight a remarkable mechanism by which BLA neurons may adapt to different levels of stress. Moreover, BLA OTCs offer a robust model to study mechanisms associated with resilience and vulnerability to stress in BLA.

SIGNIFICANCE STATEMENT Within the basolateral amygdala (BLA), neuropeptide Y (NPY) is associated with buffering the neural stress response induced by corticotropin releasing factor, and promoting stress resilience. We used a novel organotypic slice culture model of BLA, complemented with in vivo studies, to examine the cellular mechanisms associated with the actions of NPY. In organotypic slice cultures, repeated NPY treatment reduces the complexity of the dendritic extent of anxiogenic BLA principal neurons, making them less excitable. NPY, via activation of Y5 receptors, additionally inhibits and reverses the increases in dendritic extent and excitability induced by the stress hormone, corticotropin releasing factor. This NPY-mediated neuroplasticity indicates that resilience or vulnerability to stress may thus involve neuropeptide-mediated dendritic remodeling in BLA principal neurons.




ala

Circuit Stability to Perturbations Reveals Hidden Variability in the Balance of Intrinsic and Synaptic Conductances

Neurons and circuits each with a distinct balance of intrinsic and synaptic conductances can generate similar behavior but sometimes respond very differently to perturbation. Examining a large family of circuit models with non-identical neurons and synapses underlying rhythmic behavior, we analyzed the circuits' response to modifications in single and multiple intrinsic conductances in the individual neurons. To summarize these changes over the entire range of perturbed parameters, we quantified circuit output by defining a global stability measure. Using this measure, we identified specific subsets of conductances that when perturbed generate similar behavior in diverse individuals of the population. Our unbiased clustering analysis enabled us to quantify circuit stability when simultaneously perturbing multiple conductances as a nonlinear combination of single conductance perturbations. This revealed surprising conductance combinations that can predict the response to specific perturbations, even when the remaining intrinsic and synaptic conductances are unknown. Therefore, our approach can expose hidden variability in the balance of intrinsic and synaptic conductances of the same neurons across different versions of the same circuit solely from the circuit response to perturbations. Developed for a specific family of model circuits, our quantitative approach to characterizing high-dimensional degenerate systems provides a conceptual and analytic framework to guide future theoretical and experimental studies on degeneracy and robustness.

SIGNIFICANCE STATEMENT Neural circuits can generate nearly identical behavior despite neuronal and synaptic parameters varying several-fold between individual instantiations. Yet, when these parameters are perturbed through channel deletions and mutations or environmental disturbances, seemingly identical circuits can respond very differently. What distinguishes inconsequential perturbations that barely alter circuit behavior from disruptive perturbations that drastically disturb circuit output remains unclear. Focusing on a family of rhythmic circuits, we propose a computational approach to reveal hidden variability in the intrinsic and synaptic conductances in seemingly identical circuits based solely on circuit output to different perturbations. We uncover specific conductance combinations that work similarly to maintain stability and predict the effect of changing multiple conductances simultaneously, which often results from neuromodulation or injury.




ala

Neurog2 Acts as a Classical Proneural Gene in the Ventromedial Hypothalamus and Is Required for the Early Phase of Neurogenesis

The tuberal hypothalamus comprises the dorsomedial, ventromedial, and arcuate nuclei, as well as parts of the lateral hypothalamic area, and it governs a wide range of physiologies. During neurogenesis, tuberal hypothalamic neurons are thought to be born in a dorsal-to-ventral and outside-in pattern, although the accuracy of this description has been questioned over the years. Moreover, the intrinsic factors that control the timing of neurogenesis in this region are poorly characterized. Proneural genes, including Achaete-scute-like 1 (Ascl1) and Neurogenin 3 (Neurog3), are widely expressed in hypothalamic progenitors and contribute to lineage commitment and subtype-specific neuronal identities, but the potential role of Neurogenin 2 (Neurog2) remains unexplored. Birthdating in male and female mice showed that tuberal hypothalamic neurogenesis begins as early as E9.5 in the lateral hypothalamic area and arcuate nucleus and rapidly expands to dorsomedial and ventromedial neurons by E10.5, peaking throughout the region by E11.5. We confirmed an outside-in trend, except for neurons born at E9.5, and uncovered a rostrocaudal progression, but did not confirm a dorsal-to-ventral patterning of tuberal hypothalamic neuronal birth. In the absence of Neurog2, neurogenesis stalls, with a significant reduction in early-born BrdU+ cells but no change at later time points. Further, the loss of Ascl1 yielded a similar delay in neuronal birth, suggesting that Ascl1 cannot rescue the loss of Neurog2 and that these proneural genes act independently in the tuberal hypothalamus. Together, our findings show that Neurog2 functions as a classical proneural gene to regulate the temporal progression of tuberal hypothalamic neurogenesis.

SIGNIFICANCE STATEMENT Here, we investigated the general timing and pattern of neurogenesis within the tuberal hypothalamus. Our results confirmed an outside-in trend of neurogenesis and uncovered a rostrocaudal progression. We also showed that Neurog2 acts as a classical proneural gene and is responsible for regulating the birth of early-born neurons within the ventromedial hypothalamus, acting independently of Ascl1. In addition, we revealed a role for Neurog2 in cell fate specification and differentiation of ventromedial-specific neurons. Last, Neurog2 does not have cross-inhibitory effects on Neurog1, Neurog3, and Ascl1. These findings are the first to reveal a role for Neurog2 in hypothalamic development.




ala

No boats needed for a Guatemalan fishing community

Imagine living in one of the driest areas on the planet. What little rain there is falls over the space of a few months, yielding around 700 mm in total each year. A population of 1.2 million has to survive on 65 percent less water than the rest of their compatriots, on a traditional staple diet of corn and beans. [...]




ala

Meet the New Species of Snake Named After Salazar Slytherin of the Harry Potter Franchise

Perhaps the fictional Hogwarts founder would have appreciated the honor




ala

Nostalgic for the North? Take a Virtual Dogsled Ride in Fairbanks, Alaska

Armchair travelers can also enjoy 360-degree views of the city's famed Northern Lights




ala

Plan proposes $18.7M in funds for AMHS: Alaska House subcommittee advances plan to restore minimal service




ala

Hundreds honor Alaska Native rights icon Peratrovich




ala

Escalator
