AUTHOREA

Preprints

Explore 48,231 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.

A novel approach to diagnosing Southern Hemisphere planetary wave activity and its in...
Damien Irving, Ian Simmonds, and 1 more

November 16, 2014
Southern Hemisphere mid-to-upper tropospheric planetary wave activity is characterized by the superposition of two zonally-oriented, quasi-stationary waveforms: zonal wavenumber one (ZW1) and zonal wavenumber three (ZW3). Previous studies have tended to consider these waveforms in isolation and with the exception of those studies relating to sea ice, little is known about their impact on regional climate variability. We take a novel approach to quantifying the combined influence of ZW1 and ZW3, using the strength of the hemispheric meridional flow as a proxy for zonal wave activity. Our methodology adapts the wave envelope construct routinely used in the identification of synoptic-scale Rossby wave packets and improves on existing approaches by allowing for variations in both wave phase and amplitude. While ZW1 and ZW3 are both prominent features of the climatological circulation, the defining feature of highly meridional hemispheric states is an enhancement of the ZW3 component. Composites of the mean surface conditions during these highly meridional, ZW3-like anomalous states (i.e. months of strong planetary wave activity) reveal large sea ice anomalies over the Amundsen and Bellingshausen Seas during autumn and along much of the East Antarctic coastline throughout the year. Large precipitation anomalies in regions of significant topography (e.g. New Zealand, Patagonia, coastal Antarctica) and anomalously warm temperatures over much of the Antarctic continent were also associated with strong planetary wave activity. The latter has potentially important implications for the interpretation of recent warming over West Antarctica and the Antarctic Peninsula.
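The wave-envelope construct mentioned above is, in the Rossby wave packet literature, typically obtained from a Hilbert transform of the meridional wind along a latitude circle. The following is a minimal sketch of that idea under our own assumptions (toy wind field, simple zonal-mean proxy); it is not the paper's code or its exact index definition.

```python
# A minimal sketch (not the paper's code): the wave envelope is estimated from the
# analytic signal (Hilbert transform) of the meridional wind v along a latitude
# circle; its magnitude gives an envelope that varies in both phase and amplitude.
import numpy as np
from scipy.signal import hilbert

lons = np.linspace(0.0, 360.0, 144, endpoint=False)   # 2.5-degree longitude grid
# Toy meridional wind: superposed zonal wavenumber-1 and wavenumber-3 components
v = 3.0 * np.cos(np.deg2rad(lons - 30.0)) + 5.0 * np.cos(3.0 * np.deg2rad(lons - 10.0))

envelope = np.abs(hilbert(v))      # local amplitude of the combined ZW1 + ZW3 signal
activity_index = envelope.mean()   # one simple hemispheric wave-activity proxy
print(round(activity_index, 2))
```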
Satellite Dwarf Galaxies in a Hierarchical Universe: Infall Histories, Group Preproce...
Andrew Wetzel, Alis Deason, and 2 more

October 31, 2014
In the Local Group, almost all satellite dwarf galaxies that are within the virial radius of the Milky Way (MW) and M31 exhibit strong environmental influence. The orbital histories of these satellites provide the key to understanding the role of the MW/M31 halo, lower-mass groups, and cosmic reionization on the evolution of dwarf galaxies. We examine the virial-infall histories of satellites with $\mstar=10^{3-9} \msun$ using the ELVIS suite of cosmological zoom-in dissipationless simulations of 48 MW/M31-like halos. Satellites at z = 0 fell into the MW/M31 halos typically $5-8 \gyr$ ago at z = 0.5 − 1. However, they first fell into any host halo typically $7-10 \gyr$ ago at z = 0.7 − 1.5. This difference arises because many satellites experienced “group preprocessing” in another host halo, typically of $\mvir \sim 10^{10-12} \msun$, before falling into the MW/M31 halos. Satellites with lower mass and/or those closer to the MW/M31 fell in earlier and are more likely to have experienced group preprocessing; half of all satellites with $\mstar < 10^6 \msun$ were preprocessed in a group. Infalling groups also drive most satellite-satellite mergers within the MW/M31 halos. Finally, _none_ of the surviving satellites at z = 0 were within the virial radius of their MW/M31 halo during reionization (z > 6), and only <4% were satellites of any other host halo during reionization. Thus, effects of cosmic reionization versus host-halo environment on the formation histories of surviving dwarf galaxies in the Local Group occurred at distinct epochs and are separable in time.
The Victorian Earthquake Hazard Map
Dr. Dan Sandiford, Tim Rawling, and 2 more

October 07, 2022
SUMMARY This report summarises the development of a new Probabilistic Seismic Hazard Analysis (PSHA) for Victoria called the Victorian Earthquake Hazard Map (VEHM). PSHA provides forecasts of the strength of ground shaking expected within a given time frame (return period). The primary inputs are historical seismicity catalogues, paleoseismic (active fault) data, and ground-motion prediction equations. A key component in the development of the Victorian Earthquake Hazard Map was the integration of new geophysics data, derived from deployments of Australian Geophysical Observing System seismometers in Victoria, with a variety of publicly available datasets including seismicity catalogues, geophysical imagery and geological mapping. This has resulted in a new dataset that constrains the models presented in the VEHM and is also provided as a stand-alone resource for both reference and future analysis. The VEHM provides a Victorian-focussed earthquake hazard estimation tool that offers an alternative to the nationally focussed 2012 Australian Earthquake Hazard Map. The major difference between the two maps is the inclusion of active fault locations and slip estimates in the VEHM. There is also a significant difference in hazard estimation between the two maps (even without including fault-related seismicity), due primarily to differences in seismicity analysis; these differences are described in the discussion section of this report and again yield a higher-fidelity result in the VEHM. Together, these differences make the VEHM a more conservative hazard model. The VEHM currently exists as a series of online resources intended to assist those working in engineering, planning and disaster management. It is a dynamic dataset: the inputs will continue to be refined as new constraints are included and the map is made compatible with the Global Earthquake Model (GEM) software, due for release in late 2014. The VEHM was funded through the Natural Disaster Resilience Grants Scheme (NDRGS), a grant program funded by the Commonwealth Attorney-General’s Department under the National Partnership Agreement on Natural Disaster Resilience signed by the Prime Minister and Premier. The purpose of the National Partnership Agreement is to contribute towards implementation of the National Strategy for Disaster Resilience, supporting projects leading to the following outcomes: 1. reduced risk from the impact of disasters, and 2. appropriate emergency management, including volunteer capability and capacity, consistent with the State’s risk profile.
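For context, the return-period language above maps onto the standard Poisson formulation used in most PSHA (a general relation, not a statement taken from this report): for a shaking level exceeded at an annual rate \(\lambda\), the return period is \(T = 1/\lambda\) and the probability of at least one exceedance in an exposure time \(t\) is

\[ P(\ge 1 \text{ exceedance in } t) \;=\; 1 - e^{-\lambda t} \;=\; 1 - e^{-t/T}, \]

so, for example, a 10% chance of exceedance in 50 years corresponds to \(T \approx 475\) years.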
Distinguishing disorder from order in irreversible decay processes
Jonathan Nichols, Shane Flynn, and 2 more

August 25, 2014
Fluctuating rate coefficients are necessary when modeling disordered kinetic processes with mass-action rate equations. However, measuring the fluctuations of rate coefficients is a challenge, particularly for nonlinear rate equations. Here we present a measure of the total disorder in irreversible decay, iA → products (i = 1, 2, 3, …, n), governed by (non)linear rate equations: the inequality between the time-integrated square of the rate coefficient (multiplied by the time interval of interest) and the square of the time-integrated rate coefficient. We apply the inequality to empirical models for statically and dynamically disordered kinetics with i ≥ 2. These models demonstrate that the inequality quantifies the cumulative variations in a rate coefficient, and that the bound is saturated (equality holds) only when the rate coefficient is constant in time.
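Written out explicitly, the disorder measure described above compares two functionals of the time-dependent rate coefficient \(k(t)\) over the interval of interest \([t_i, t_f]\), with \(\Delta t = t_f - t_i\) (this is a direct transcription of the verbal definition; the bound itself is of Cauchy–Schwarz type):

\[ \Delta t \int_{t_i}^{t_f} k^{2}(t)\,dt \;\ge\; \left( \int_{t_i}^{t_f} k(t)\,dt \right)^{2}, \]

with equality if and only if \(k(t)\) is constant on the interval.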
Real-space grids and the Octopus code as tools for the development of new simulation...
Xavier Andrade, David A. Strubbe, and 15 more

August 18, 2014
Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space, where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties are a great advantage when implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
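As a toy illustration of the real-space idea described above (a minimal sketch, not code from Octopus): fields such as the wavefunction live on a uniform mesh, and differential operators become finite-difference stencils; here a three-point Laplacian is used to evaluate a kinetic energy on a 1D grid.

```python
# Illustrative sketch (not Octopus code): fields are discretized on a uniform mesh
# and the Laplacian becomes a central finite-difference stencil.
import numpy as np

def laplacian_1d(f, dx):
    """Second derivative of f on a uniform grid via a 3-point central stencil."""
    d2f = np.zeros_like(f)
    d2f[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return d2f  # zero (hard-wall) boundary values assumed at the box edges

# Example: kinetic energy <psi| -1/2 d^2/dx^2 |psi> for a normalized Gaussian
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0)
psi /= np.sqrt(np.sum(psi**2) * dx)                 # normalize on the grid
kinetic = -0.5 * np.sum(psi * laplacian_1d(psi, dx)) * dx
print(kinetic)  # ~0.25 in atomic units for this Gaussian
```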
The "Paper" of the Future
Alyssa Goodman, Josh Peek, and 10 more

January 17, 2021
_A 5-minute video demonstration of this paper is available at this YouTube link._ PREAMBLE A variety of research on human cognition demonstrates that humans learn and communicate best when more than one processing system (e.g. visual, auditory, touch) is used. Related research shows that, no matter how technical the material, most humans retain and process information best when they can put a narrative "story" to it. So, when considering the future of scholarly communication, we should be careful not to blithely do away with the linear narrative format that articles and books have followed for centuries: instead, we should enrich it. Much more than text is used to communicate in Science. Figures, which include images, diagrams, graphs, charts, and more, have enriched scholarly articles since the time of Galileo, and ever-growing volumes of data underpin most scientific papers. When scientists communicate face-to-face, as in talks or small discussions, these figures are often the focus of the conversation. In the best discussions, scientists can manipulate the figures, and access the underlying data, in real time, so as to test out various what-if scenarios and to explain findings more clearly. THIS SHORT ARTICLE EXPLAINS—AND SHOWS WITH DEMONSTRATIONS—HOW SCHOLARLY "PAPERS" CAN MORPH INTO LONG-LASTING RICH RECORDS OF SCIENTIFIC DISCOURSE, enriched with deep data and code linkages, interactive figures, audio, video, and commenting.
Compressed Sensing for the Fast Computation of Matrices: Application to Molecular Vib...
Jacob Sanders, Xavier Andrade, and 2 more

July 11, 2014
This article presents a new method to compute matrices from numerical simulations based on the ideas of sparse sampling and compressed sensing. The method is useful for problems where the determination of the entries of a matrix constitutes the computational bottleneck. We apply this new method to an important problem in computational chemistry: the determination of molecular vibrations from electronic structure calculations, where our results show that the overall scaling of the procedure can be improved in some cases. Moreover, our method provides a general framework for bootstrapping cheap low-accuracy calculations in order to reduce the required number of expensive high-accuracy calculations, resulting in a significant 3\(\times\) speed-up in actual calculations.
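To make the compressed-sensing idea concrete, here is a generic sketch (not the authors' method, and applied to a sparse vector rather than to a molecular Hessian): a signal with few nonzero entries is recovered from a reduced number of random linear measurements by l1-regularized least squares, solved with plain iterative soft-thresholding (ISTA).

```python
# Generic compressed-sensing sketch: recover a sparse signal from m << n random
# linear measurements via l1-regularized least squares (ISTA iterations).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 8                      # ambient size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement (sampling) operator
y = A @ x_true                            # the m "expensive" measurements

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.01                                # l1 regularization weight
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(2000):                     # ISTA iterations
    x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)

print(np.max(np.abs(x - x_true)))         # small: the sparse signal is approximately recovered
```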
Large-Scale Microscopic Traffic Behaviour and Safety Analysis of Québec Roundabout De...
Paul St-Aubin, Luis Miranda-Moreno, and 2 more

July 08, 2014
INTRODUCTION Roundabouts are a relatively new design for intersection traffic management in North America. With great promise from abroad in terms of safety as well as capacity (roundabouts are a staple of European road design), they have only recently proliferated in parts of North America, including the province of Québec. However, questions remain regarding the feasibility of introducing the roundabout to regions where driving culture and road design philosophy differ and where drivers are not habituated to their use. This aspect of road-user behaviour integration is crucial to implementation, because roundabouts manage traffic conflicts passively. In roundabouts, road-user interactions and driving conflicts are handled entirely by way of driving etiquette between road users: lane merging, right-of-way, yielding behaviour, and eye contact in the case of vulnerable road users are all at play in successfully negotiating passage through a roundabout. This contrasts with typical North American intersections managed by computer-controlled traffic lights (or, on occasion, police officers) and with traffic circles of all kinds, which are also signalized. And while roundabouts share much in common with four-way and two-way stops, they are frequently used for high-capacity, even high-speed, intersections where such stops would normally not be justified. Resistance to adoption remains significant in some areas, notably on the part of vulnerable road users such as pedestrians and cyclists, but also among some drivers. While a number of European studies cite reductions in accident probability and accident severity, particularly for the Netherlands, Denmark, and Sweden, research on roundabouts in North America is still limited, and even fewer attempts at microscopic behaviour analysis exist anywhere in the world. The latter is important because it provides insight into the inner mechanics of driving behaviour, which may be key to tailoring roundabout design for regional adoption and implementation efforts. Fortunately, more systematic and data-rich analysis techniques are becoming available. This paper proposes the application of a novel, video-based, semi-automated trajectory analysis approach for large-scale microscopic behavioural analysis of 20 of the 100 available roundabouts in Québec, investigating 37 different roundabout weaving zones. The objectives of this paper are to explore the impact of Québec roundabout design characteristics, geometry, and built environment on driver behaviour and safety through microscopic, video-based trajectory analysis. Driver behaviour is characterized by merging speed and time-to-collision, a maturing indicator for surrogate safety and behaviour analysis in the field of transportation safety. In addition, this work represents one of the largest applications of surrogate safety analysis to date.
Comparison of Various Time-to-Collision Prediction and Aggregation Methods for Surrog...
Paul St-Aubin, Luis Miranda-Moreno, and 2 more

July 08, 2014
INTRODUCTION Traditional methods of road safety analysis rely on direct road accident observations, data sources which are rare and expensive to collect and which also carry the social cost of placing citizens at risk of unknown danger. Surrogate safety analysis is a growing discipline in the field of road safety analysis that promises a more proactive approach to road safety diagnosis. This methodology uses non-crash traffic events, and measures thereof, as predictors of collision probability and severity, as they are significantly more frequent, cheaper to collect, and have no social impact. Time-to-collision (TTC) is an example of an indicator that primarily indicates collision probability: the smaller the TTC, the less time drivers have to perceive and react before a collision, and thus the higher the probability of a collision outcome. Relative positions and velocities between road users, or between a user and obstacles, can be characterised by a collision course and the corresponding TTC. Meanwhile, driving speed (absolute speed) is an example of an indicator that primarily measures collision severity: the higher the travelling speed, the more stored kinetic energy is dissipated during a collision impact. Similarly, large speed differentials between road users or with stationary obstacles may also contribute to collision severity, though the TTC depends on relative distance as well. Driving speed is used extensively in stopping-sight-distance models, with some even suggesting that drivers modulate their emergency braking in response to travel speed. Others contend that there is little empirical evidence of a relationship between speed and collision probability. Many surrogate safety methods have been used in the literature, especially recently with the renewal of automated data collection methods, but consistency in the definitions of traffic events and indicators, in their interpretation, and in the transferability of results is still lacking. While a wide diversity of models demonstrates that research in the field is thriving, there remains a need for comparison of the methods, and even for a methodology for comparison, in order to make surrogate safety practical for practitioners. For example, time-to-collision measures collision-course events, but the definition of a collision course lacks rigour in the literature. Also lacking is systematic validation of the different techniques. Some early attempts have been made with the Swedish Traffic Conflict Technique using trained observers, though more recent attempts across different methodologies, preferably with automated and objectively defined measures, are still needed. Ideally, this would be done with respect to crash data and crash-based safety diagnosis. The second-best method is to compare the characteristics of all the methods and their results on the same data set, but public benchmark data is also very limited despite recent efforts. The objectives of this paper are to review the definition and interpretation of one of the most ubiquitous and least context-sensitive surrogate safety indicators, namely time-to-collision, for surrogate safety analysis using i) consistent, recent, and, most importantly, objective definitions of surrogate safety indicators, ii) a very large data set across numerous sites, and iii) the latest developments in automated analysis.
This work examines the use of various motion-prediction methods (constant velocity, normal adaptation, and observed motion patterns) for the TTC safety indicator, chosen for its transferability, as well as space- and time-aggregation methods for continuous surrogate safety indicators. This represents an application of surrogate safety analysis to one of the largest data sets to date.
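As a concrete illustration of the simplest of these motion-prediction hypotheses (a minimal sketch, not the paper's implementation): under constant-velocity prediction, each road user is extrapolated along its current velocity, and TTC is the first time at which the two users would come within an assumed collision distance of one another.

```python
# Minimal sketch of TTC under the constant-velocity motion-prediction hypothesis.
# r is the assumed collision threshold on centre-to-centre distance (m), not taken
# from the paper.
import numpy as np

def ttc_constant_velocity(p1, v1, p2, v2, r=2.0):
    """Return the TTC in seconds, or None if no collision course is predicted."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position (m)
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity (m/s)
    a = dv @ dv
    b = 2.0 * (dp @ dv)
    c = dp @ dp - r * r
    if a == 0.0:                       # identical velocities: gap never changes
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                     # predicted paths never come within r
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)   # earlier root = first predicted contact
    return t if t >= 0.0 else None

# Example: a vehicle at 10 m/s closing on a stopped vehicle 40 m ahead -> TTC = 3.8 s
print(ttc_constant_velocity(p1=(0.0, 0.0), v1=(10.0, 0.0), p2=(40.0, 0.0), v2=(0.0, 0.0)))
```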
The Fork Factor: an academic impact factor based on reuse.
Ferdinando Pucci, Alberto Pepe, and 1 more

December 21, 2020
HOW IS ACADEMIC RESEARCH EVALUATED? There are many different ways to determine the impact of scientific research. One of the oldest and best established measures is to look at the Impact Factor (IF) of the academic journal where the research has been published. The IF is simply the average number of citations to recent articles published in such an academic journal. The IF is important because the reputation of a journal is also used as a proxy to evaluate the relevance of past research performed by a scientist when s/he is applying for a new position or for funding. So, if you are a scientist who publishes in high-impact journals (the big names), you are more likely to get tenure or a research grant. Several criticisms have been made of the use and misuse of the IF. One of these concerns the policies that academic journal editors adopt to boost the IF of their journal (and attract more advertising), to the detriment of readers, writers, and science at large. Unfortunately, these policies promote the publication of sensational claims by researchers, who are in turn rewarded by funding agencies for publishing in high-IF journals. This effect is broadly recognized by the scientific community and represents a conflict of interest that, in the long run, increases public distrust in published data and slows down scientific discovery. Scientific discoveries should instead foster new findings through the sharing of high-quality scientific data, which feeds back into an increasing pace of scientific breakthroughs. The IF is clearly a distorting player in this situation. To resolve the conflict of interest, it is thus fundamental that funding agencies (a major driving force in science) start complementing the IF with a better proxy for the relevance of publishing venues and, in turn, of scientists’ work. RESEARCH IMPACT IN THE ERA OF FORKING. A number of alternative metrics for evaluating academic impact are emerging. These include metrics that give scholars credit for sharing raw science (like datasets and code), semantic publishing, and social media contribution, based not solely on citation but also on usage, social bookmarking, and conversations. We, at Authorea, strongly believe that these alternative metrics should and will be a fundamental ingredient of how scholars are evaluated for funding in the future. In fact, Authorea already welcomes data, code, and raw science materials alongside its articles, and is built on an infrastructure (Git) that naturally poses as a framework for distributing, versioning, and tracking those materials. Git is a version control platform currently employed by developers for collaborating on source code, and its features perfectly fit the needs of most scientists as well. A versioning system, such as Authorea and GitHub, empowers FORKING of peer-reviewed research data, allowing a colleague of yours to further develop it in a new direction. Forking inherits the history of the work and preserves the value chain of science (i.e., who did what). In other words, forking in science means _standing on the shoulders of giants_ (or soon-to-be giants) and is equivalent to citing someone else’s work, but in a functional manner. Whether it is a “negative” result (we like to call it a non-confirmatory result) or not, publishing your peer-reviewed research on Authorea will promote forking of your data. (To learn how we plan to implement peer review in the system, please stay tuned for future posts on this blog.) MORE FORKING, MORE IMPACT, HIGHER QUALITY SCIENCE.
Obviously, the more of your research data you publish, the higher the chances that they will be forked and used as a basis for groundbreaking work, and, in turn, the higher the interest in your work and your academic impact. Whether your projects are data-driven peer-reviewed articles on Authorea discussing a new finding, raw datasets detailing novel findings on Zenodo or Figshare, or source code repositories hosted on GitHub presenting a new statistical package, every bit of your work that can be reused will be forked and will give you credit. Do you want to do science a favor? Publish your non-confirmatory results as well, and help your scientific community quickly spot bad science by publishing a dead-end fork (Figure 1).
The effect of carbon subsidies on marine planktonic niche partitioning and recruitmen...
Charles Pepe-Ranney, Ed Hall, and 1 more

June 16, 2014
INTRODUCTION Biofilms are diverse and complex microbial consortia, and the biofilm lifestyle is the rule rather than the exception for microbes in many environments. Large- and small-scale biofilm architectural features play an important role in their ecology and influence their role in biogeochemical cycles. Fluid mechanics impact biofilm structure and assembly, but it is less clear how other abiotic factors such as resource availability affect biofilm assembly. Aquatic biofilms initiate with seed propagules from the planktonic community. Thus, resource amendments that influence planktonic communities may also influence the recruitment of microbial populations during biofilm community assembly. In a crude sense, biofilm and planktonic microbial communities divide into two key groups: oxygenic phototrophs including eukaryotes and cyanobacteria (hereafter “photoautotrophs”), and heterotrophic bacteria and archaea. This dichotomy, admittedly an abstraction (e.g. non-phototrophs can also be autotrophs), can be a powerful paradigm for understanding community shifts across ecosystems of varying trophic state. Heterotrophs meet some to all of their organic carbon (C) requirements from photoautotroph-produced C while simultaneously competing with photoautotrophs for limiting nutrients such as phosphorus (P). The presence of external C inputs, such as terrigenous C leaching from the watershed or C exudates derived from macrophytes, can alleviate heterotroph reliance on photoautotroph-derived C and shift the heterotroph-photoautotroph relationship from commensal and competitive to strictly competitive. Therefore, increased C supply should increase the resource space available to heterotrophs and increase competition for mineral nutrients, decreasing the nutrients available for photoautotrophs (assuming that heterotrophs are superior competitors for limiting nutrients, as has been observed). These dynamics should result in an increase in heterotroph biomass relative to photoautotroph biomass along a gradient of increasing labile C inputs. We refer to this differential allocation of limiting resources among components of the microbial community as niche partitioning. While these gross-level dynamics have been discussed conceptually and to some extent demonstrated empirically, the effects of biomass dynamics on photoautotroph and heterotroph membership and structure have not been directly evaluated in plankton or biofilms. In addition, how changes in planktonic communities propagate to biofilms during community assembly is not well understood. We designed this study to test whether C subsidies shift the biomass balance between autotrophs and heterotrophs within the biofilm or its seed pool (i.e. the plankton), and to measure how changes in biomass pool size alter the composition of the plankton and biofilm communities. Specifically, we amended marine mesocosms with varying levels of labile C input and evaluated differences in photoautotroph and heterotrophic bacterial biomass in plankton and biofilm samples along the C gradient. In each treatment we characterized plankton and biofilm community composition by PCR amplifying and DNA sequencing 16S rRNA genes and plastid 23S rRNA genes.
Lattice polymers with two competing collapse interactions
Andrea Bedini, Aleksander Owczarek, and 2 more

June 06, 2014
We study a generalised model of self-avoiding trails, containing two different types of interaction (nearest-neighbour contacts and multiply visited sites), using computer simulations. This model contains various previously studied models as special cases. We find that the strong collapse transition induced by multiply visited sites is a singular point in the phase diagram and corresponds to a higher-order multicritical point separating a line of weak second-order transitions from a line of first-order transitions.
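For orientation, a generic Boltzmann weight for a trail configuration \(\varphi\) with two competing interactions can be written as (our notation, an assumed generic form rather than the paper's exact definition):

\[ W(\varphi) \;=\; \omega_{\mathrm{nn}}^{\,m_{\mathrm{nn}}(\varphi)}\;\omega_{\mathrm{mv}}^{\,m_{\mathrm{mv}}(\varphi)}, \]

where \(m_{\mathrm{nn}}\) counts nearest-neighbour contacts, \(m_{\mathrm{mv}}\) counts multiply visited sites, and the competition between the two fugacities \(\omega_{\mathrm{nn}}\) and \(\omega_{\mathrm{mv}}\) drives the collapse behaviour described above.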
Non-cyanobacterial diazotrophs mediate dinitrogen fixation in biological soil crusts...
Charles Pepe-Ranney, Chantal Koechli, and 3 more

May 30, 2014
ABSTRACT Biological soil crusts (BSC) are key components of ecosystem productivity in arid lands, and they cover a substantial fraction of the terrestrial surface. In particular, BSC N₂-fixation contributes significantly to the nitrogen (N) budget of arid land ecosystems. In mature crusts, N₂-fixation is largely attributed to heterocystous cyanobacteria; however, early successional crusts possess few N₂-fixing cyanobacteria, suggesting that microorganisms other than cyanobacteria mediate N₂-fixation during the critical early stages of BSC development. DNA stable isotope probing (DNA-SIP) with ¹⁵N₂ revealed that _Clostridiaceae_ and _Proteobacteria_ are the most common microorganisms that assimilate ¹⁵N₂ in early successional crusts. The _Clostridiaceae_ identified are divergent from previously characterized isolates, though N₂-fixation has previously been observed in this family. The _Proteobacteria_ identified share >98.5% SSU rRNA gene sequence identity with isolates from genera known to possess diazotrophs (e.g. _Pseudomonas_, _Klebsiella_, _Shigella_, and _Ideonella_). The low abundance of these heterotrophic diazotrophs in BSC may explain why they have not been characterized previously. Diazotrophs play a critical role in BSC formation, and characterization of these organisms represents a crucial step towards understanding how anthropogenic change will affect the formation and ecological function of BSC in arid ecosystems. KEYWORDS: microbial ecology / stable isotope probing / nitrogen fixation / biological soil crusts
Ternary Ladder Operators
Benedict Irwin

November 02, 2020
ABSTRACT We develop a triplet operator system that encompasses the structure of quark combinations. Ladder operators are constructed. The constants β are currently being determined.
Counting the Cost: A Report on APC-supported Open Access Publishing in a Research Lib...
Mark Newton, Eva T. Cunningham, and 2 more

May 12, 2014
With one hundred twenty-two articles published, the open access journal _Tremor and Other Hyperkinetic Movements_ (tremorjournal.org, ISSN: 2160-8288) is growing its readership and expanding its influence among patients, clinicians, researchers, and the general public interested in issues of non-Parkinsonian tremor disorders. Among the characteristics that set the journal apart from similar publications, _Tremor_ is published in partnership with the library-based publications program at Columbia University’s Center for Digital Research and Scholarship (CDRS). The production of _Tremor_ in conjunction with its editor, a research faculty member, clinician, and epidemiologist at the Columbia University Medical Center, has pioneered several new workflows at CDRS, among them article-charge processing, coordination of vendor services, integration into PubMed Central, administration of publication scholarships granted through a patient-advocacy organization, and open-source platform development. Open access publishing ventures in libraries often strive for lean operations by attempting to capitalize on the scholarly impact available through the use of templated and turnkey publication systems. For CDRS, production of _Tremor_ has provided an opportunity to build operational capacity for more involved publication needs. The following report introduces a framework and account of the costs of producing such a publication as a guide for library and other non-traditional publishing operations interested in gauging the necessary investments. Following a review of the literature published to date on the costs of open access publishing and on the practice of journal publishing in academic libraries, the authors present a brief history of _Tremor_ and a tabulation of the costs and expenditure of effort by library staff in production. Although producing _Tremor_ has been more expensive than other partner publications in the center's portfolio, the experience has improved the library's capacity for addressing more challenging projects, and developments for _Tremor_ have already begun to be applied to other journals.
Large-Scale Automated Proactive Road Safety Analysis Using Video Data
Paul St-Aubin, Nicolas Saunier, and 2 more

May 06, 2014
Due to the complexity and pervasiveness of transportation in daily life, the use and combination of larger data sets and data streams promises smarter roads and a better understanding of our transportation needs and environment. For this purpose, intelligent transportation systems (ITS) are steadily being rolled out, providing a wealth of information, and transitional technologies, such as computer vision applied to low-cost surveillance or consumer cameras, are already leading the way. This paper presents, in detail, a practical framework for implementation of an automated, high-resolution, video-based traffic-analysis system, particularly geared towards researchers for behavioural studies and road safety analysis, or towards practitioners for traffic flow model validation. This system collects large amounts of microscopic traffic flow data from ordinary traffic using CCTV and consumer-grade video cameras and provides the tools for conducting basic traffic flow analyses as well as more advanced, proactive safety and behaviour studies. This paper demonstrates the process step by step, illustrated with examples, and applies the methodology to a large and detailed case study of roundabouts (nearly 80,000 motor vehicles tracked up to 30 times per second driving through a roundabout). In addition to providing a rich set of behavioural data about Time-to-Collision and gap times at nearly 40 roundabout weaving zones, some data validation is performed using the standard Measure of Tracking Accuracy, with results in the 85-95% range.
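For reference, if the Measure of Tracking Accuracy used here is the CLEAR MOT metric MOTA (an assumption on our part; the abstract does not define the measure), it is computed from per-frame counts of missed detections (FN), false positives (FP), identity switches (IDSW) and ground-truth objects (GT) as

\[ \mathrm{MOTA} \;=\; 1 - \frac{\sum_t \left( \mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t \right)}{\sum_t \mathrm{GT}_t}. \]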
Agroforestry: An adaptation measure for sub-Saharan African food systems in response...
Robert Orzanna, Nika Scher, and 1 more

April 06, 2017
This paper examines the impact of increasing weather extremes due to climate change on African food systems. The specific focus lies on agroforestry adaptation measures that can be applied by smallholder farmers to protect their livelihoods and to make their food production more resilient against the effects of those weather extremes. The adoption potential of agroforestry is evaluated, taking into consideration regional environmental and socio-economic differences, and possible barriers to adoption with respect to extrinsic and intrinsic factors are outlined. According to the indicators that approximate extrinsic factors, a high adoption potential for agroforestry is likely to be found in Angola, Botswana, Cameroon, Cabo Verde, Gabon, Ghana, Mauritania and Senegal. A very low potential exists in Somalia, Eritrea, South Sudan and Rwanda.
Science was always meant to be open
Alberto Pepe

February 22, 2017
Here’s my crux: I find myself criticizing over and over the way that scientific articles look today. I have said many times that scientists today write 21st-century research, using 20th-century tools, packaged in a 17th-century format. When I give talks, I often use 400-year-old articles to demonstrate that they look and feel similar to the articles we publish today. But the scientific article of the 1600s looked that way for a reason. This forthcoming article by explains: In the early 1600s, Galileo Galilei turned a telescope toward Jupiter. In his log book each night, he drew to-scale schematic diagrams of Jupiter and some oddly moving points of light near it. Galileo labeled each drawing with the date. Eventually he used his observations to conclude that the Earth orbits the Sun, just as the four Galilean moons orbit Jupiter. History shows Galileo to be much more than an astronomical hero, though. His clear and careful record keeping and publication style not only let Galileo understand the Solar System; it continues to let anyone understand how Galileo did it. Galileo’s notes directly integrated his data (drawings of Jupiter and its moons), key metadata (timing of each observation, weather, telescope properties), and text (descriptions of methods, analysis, and conclusions). Critically, when Galileo included the information from those notes in Sidereus Nuncius, this integration of text, data and metadata was preserved:
Two Local Volume Dwarf Galaxies Discovered in 21 cm Emission: Pisces A and B
Erik Tollerud and 4 more

April 15, 2014
INTRODUCTION The properties of faint dwarf galaxies at or beyond the outer reaches of the Local Group (1 − 5 Mpc) probe the efficiency of environmentally driven galaxy formation processes and provide direct tests of cosmological predictions \citep[e.g., ][]{kl99ms, moo99ms, stri08commonmass, krav10satrev, kirby10, BKBK11, pontzen12, geha13}. However, searches for faint galaxies suffer from strong luminosity and surface brightness biases that render galaxies with LV ≲ 10⁶ L⊙ difficult to detect beyond the Local Group. Because of these biases, searching for nearby dwarf galaxies with methodologies beyond the standard optical star count methods is essential. This motivates searches for dwarf galaxies using the 21 cm emission line of neutral hydrogen (HI). While such searches cannot identify passive dwarf galaxies like most Local Group satellites, which lack HI, they have the potential to find gas-rich, potentially star-forming dwarf galaxies. This is exemplified by the case of the Leo P dwarf galaxy, found first in HI and later confirmed via optical imaging. Here we describe two faint dwarf galaxies identified via HI emission in the first data release of the Galactic Arecibo L-band Feed Array HI (GALFA-HI) survey. As described below, they are likely within the Local Volume (<10 Mpc) but just beyond the Local Group (≳1 Mpc), so we refer to them as Pisces A and B. This paper is organized as follows: in Section [sec:data], we present the data used to identify these galaxies. In Section [sec:distance], we consider possible distance scenarios, while in Section [sec:conc] we provide context and some conclusions. Where relevant, we adopt a Hubble constant of $H_0=69.3 \; {\rm km \; s}^{-1}{\rm Mpc}^{-1}$ from WMAP9.
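A minimal sketch (not from the paper) of the pure Hubble-flow distance estimate implied by the adopted value of \(H_0\); the paper's actual distance discussion weighs several scenarios, and peculiar velocities matter at these small distances.

```python
# Hubble-flow distance estimate d = v / H0. H0 = 69.3 km/s/Mpc is the value adopted
# in the abstract; the recession velocity below is purely illustrative.
H0 = 69.3            # km/s/Mpc
v_recession = 300.0  # km/s, hypothetical value, not a measurement from the paper
d_mpc = v_recession / H0
print(f"{d_mpc:.1f} Mpc")  # ~4.3 Mpc for this illustrative velocity
```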
Swabs to Genomes: A Comprehensive Workflow
Jenna M. Lang, Bretwood Higman, and 6 more

April 12, 2014
Abstract The sequencing, assembly, and basic analysis of microbial genomes, once a painstaking and expensive undertaking, have become much easier for research labs with access to standard molecular biology and computational tools. However, there are a wide variety of options available for DNA library preparation and sequencing, and inexperience with bioinformatics can pose a significant barrier to entry for many who may be interested in microbial genomics. The objective of the present study was to design, test, troubleshoot, and publish a simple, comprehensive workflow from the collection of an environmental sample (a swab) to a published microbial genome, empowering even a lab or classroom with limited resources and bioinformatics experience to perform it.
Predictions for Observing Protostellar Outflows with ALMA
Christopher, Stella Offner, and 2 more

March 31, 2014
INTRODUCTION Young protostars are observed to launch energetic collimated bipolar mass outflows. These protostellar outflows play a fundamental role in the star formation process on a variety of scales. On sub-pc scales they entrain and unbind core gas, thus setting the efficiency at which dense gas turns into stars. Interaction between outflows and infalling material may regulate protostellar accretion and, ultimately, terminate it. On sub-pc up to cloud scales, outflows inject substantial energy into their surroundings, potentially providing a means of sustaining cloud turbulence over multiple dynamical times. The origin of outflows is attributed to the presence of magnetic fields, and a variety of different models have been proposed to explain the launching mechanism \citep[e.g.,][]{arce07}. Of these, the “disk-wind” model, in which the gas is centrifugally accelerated from the accretion disk surface, and the “X-wind” model, in which gas is accelerated along tightly wound field lines, are most commonly invoked to explain observed outflow signatures. However, investigating the launching mechanism is challenging because launching occurs on scales of a few stellar radii and during times when the protostar is heavily extincted by its natal gas. Consequently, separating outflow gas from accreting core gas, discriminating between models, and determining fundamental outflow properties are nontrivial. Three main approaches have been applied to studying outflows. First, single-dish molecular line observations have been successful in mapping the extent of outflows and their kinematics on core to cloud scales \citep[][]{bourke97,arce10,dunham14}. However, outflow gas with velocities comparable to the cloud turbulent velocity can only be extracted with additional assumptions and modeling \citep[e.g.,][]{arce01b,dunham14}, which are difficult to apply to confused, clustered star-forming environments. Second, interferometry provides a means of mapping outflows down to 1,000 AU scales, and the Atacama Large Millimeter/submillimeter Array (ALMA) is extending these limits down to sub-AU scales. However, interferometry is not suitable for producing large high-resolution maps, and it resolves out larger-scale structure. Consequently, it is difficult to assemble a complete and multi-scale picture of outflow properties with these observations. Finally, numerical simulations provide a complementary approach that supplies three-dimensional predictions for launching, entrainment and energy injection. The most promising avenue for understanding outflows lies at the intersection of numerical modeling and observations. By performing synthetic observations to model molecular and atomic lines, continuum, and observational effects, simulations can be mapped into the observational domain, where they can be compared directly to observations \citep[e.g.,][]{Offner11,Offner12b,Mairs13}. Such direct comparisons are important for assessing the “reality” of the simulations, for interpreting observational data, and for assessing observational uncertainties. In addition to observational instrument limitations, chemistry and radiative transfer introduce additional uncertainties that are difficult to quantify without realistic models. Synthetic observations have previously been performed in the context of understanding outflow opening angles, observed morphology, and impact on spectral energy distributions. The imminent completion of ALMA provides further motivation for predictive synthetic observations.
Although ALMA will have unprecedented sensitivity and resolution compared to existing instruments, by nature interferometry resolves out large-scale structure, and different configurations will be sensitive to different scales. Atmospheric noise and total observing time may also affect the fidelity of the data. Previous synthetic observations performed by suggest that the superior resolution of full ALMA and the Atacama Compact Array (ACA) will be able to resolve core structure and fragmentation prior to binary formation. predicts that ALMA will be able to resolve complex outflow velocity structure and helical structure in molecular emission. In this paper we seek to quantify the accuracy of different ALMA configurations in recovering fundamental gas properties such as mass, line-of-sight momentum, and energy. We use the CASA software package to synthetically observe protostellar outflows in the radiation-hydrodynamic simulations of . By modeling the emission at different times, inclinations, molecular lines, and observing configurations, we evaluate how well physical quantities can be measured in the star formation process. In Section [Methods] we describe our methods for modeling and observing outflows. In Section [results] we evaluate the effects of different observational parameters on bulk quantities. We discuss results and summarize conclusions in Section [Conclusions].
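For concreteness, the bulk quantities being recovered can be illustrated with a minimal sketch (not the authors' pipeline): mass, line-of-sight momentum, and kinetic energy summed over a position-position-velocity cube, after a crude velocity cut to separate outflow from cloud emission. The conversion from line intensity to mass per voxel is assumed to have been done already.

```python
# Minimal sketch of bulk outflow quantities from a PPV cube. 'mass_cube' is assumed
# to already hold gas mass per voxel (e.g. converted from CO intensity under assumed
# abundance and excitation), in solar masses; cube axes are (x, y, velocity).
import numpy as np

def outflow_bulk_quantities(mass_cube, v_channels, v_min=2.0):
    """Sum mass, |momentum|, and energy over channels with |v_los| >= v_min (km/s)."""
    out = np.abs(v_channels) >= v_min          # crude outflow/cloud separation cut
    m = mass_cube[..., out]
    v = v_channels[out]
    mass = m.sum()                             # Msun
    momentum = (m * np.abs(v)).sum()           # Msun km/s
    energy = 0.5 * (m * v**2).sum()            # Msun km^2/s^2
    return mass, momentum, energy

# Toy example: a 32x32x40 cube with channels from -10 to +10 km/s
v = np.linspace(-10.0, 10.0, 40)
cube = np.random.default_rng(1).random((32, 32, 40)) * 1e-4
print(outflow_bulk_quantities(cube, v))
```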
Quaternion Based Metrics in Relativity
Benedict Irwin

November 02, 2020
ABSTRACT By introducing a new form of metric tensor, the same derivation that yields the electromagnetic tensor Fμν from the potentials Aμ leads to the dual space (Hodge dual) of the regular Fμν tensor. There are additional components in the i, j, k planes; however, if after the derivation only the real part is considered, a physically consistent electromagnetic theory is recovered, with a relabelling of $$ fields to $$ fields and vice versa.
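For reference, the standard definitions of the objects named above (not the abstract's modified quaternion construction) are

\[ F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu, \qquad \tilde{F}^{\mu\nu} = \tfrac{1}{2}\,\varepsilon^{\mu\nu\rho\sigma} F_{\rho\sigma}, \]

and the Hodge dual exchanges the electric and magnetic field components, consistent with the field relabelling described in the abstract.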
The Microbes We Eat
Jenna M. Lang, Jonathan A. Eisen, and 2 more

March 04, 2014
ABSTRACT Far more attention has been paid to the microbes in our feces than the microbes in our food. Research efforts dedicated to the microbes that we eat have historically been focused on a fairly narrow range of species, namely those which cause disease and those which are thought to confer some "probiotic" health benefit. Little is known about the effects of ingested microbial communities that are present in typical American diets, and even the basic questions of which microbes, how many of them, and how much they vary from diet to diet and meal to meal, have not been answered. We characterized the microbiota of three different dietary patterns in order to estimate: the average total amount of daily microbes ingested via food and beverages, and their composition in three daily meal plans representing three different dietary patterns. The three dietary patterns analyzed were: 1) the Average American (AMERICAN): focused on convenience foods, 2) USDA recommended (USDA): emphasizing fruits and vegetables, lean meat, dairy, and whole grains, and 3) Vegan (VEGAN): excluding all animal products. Meals were prepared in a home kitchen or purchased at restaurants and blended, followed by microbial analysis including aerobic, anaerobic, yeast and mold plate counts as well as 16S rRNA PCR survey analysis. Based on plate counts, the USDA meal plan had the highest total amount of microbes at \(1.3 \times 10^9\) CFU per day, followed by the VEGAN meal plan and the AMERICAN meal plan at \(6 \times 10^6\) and \(1.4 \times 10^6\) CFU per day, respectively. There was no significant difference in diversity among the three dietary patterns. Individual meals clustered based on taxonomic composition independent of dietary pattern. For example, meals that were abundant in Lactic Acid Bacteria were from all three dietary patterns. Some taxonomic groups were correlated with the nutritional content of the meals. Predictive metagenome analysis using PICRUSt indicated differences in some functional KEGG categories across the three dietary patterns and for meals clustered based on whether they were raw or cooked. Further studies are needed to determine the impact of ingested microbes on the intestinal microbiota, the extent of variation across foods, meals and diets, and the extent to which dietary microbes may impact human health. The answers to these questions will reveal whether dietary microbial approaches beyond probiotics taken as supplements - _i.e._, ingested as foods - are important contributors to the composition, inter-individual variation, and function of our gut microbiota.
PRECISION ASTEROSEISMOLOGY OF THE WHITE DWARF GD 1212 USING A TWO-WHEEL CONTROLLED KE...
JJ Hermes and 10 more

February 20, 2014
We present a preliminary analysis of the cool pulsating white dwarf GD 1212, enabled by more than 11.5 days of space-based photometry obtained during an engineering test of a two-reaction-wheel-controlled _Kepler_ spacecraft. We detect at least 21 independent pulsation modes, ranging from 369.8 to 1220.8 s, and at least 17 nonlinear combination frequencies of those independent pulsations. Our longest uninterrupted light curve, 9.0 days in length, evidences coherent difference frequencies at periods inaccessible from the ground, up to 14.5 hr, the longest-period signals ever detected in a pulsating white dwarf. These results mark some of the first science to come from a two-wheel-controlled _Kepler_ spacecraft, proving the capability for unprecedented discoveries afforded by extending _Kepler_ observations to the ecliptic.