Interaction of a Buoyant Plume with a Turbulent Canopy Mixing Layer
This study aims to understand how instabilities and turbulence arising from canopy mixing layers affect wind-driven wildfire spread. Using an experimental water-flume setup with a model vegetation canopy and thermally buoyant plumes, we study the influence of canopy-induced shear and turbulence on buoyant plume trajectories. By varying the length of canopy upstream of the plume source, we control the strength of the canopy turbulence and observe the plume trajectory under varying turbulence but constant cross-flow conditions. Results indicate that increasing canopy turbulence produces stronger vertical oscillatory motion and greater variability in the plume's position. Furthermore, we find that the canopy coherent structures characterized at the plume source set the intensity and frequency of the plume's oscillation; these perturbations then travel along the length of the plume at the free-stream velocity. However, the buoyancy developed by the plume can resist the influence of the canopy structures. Because of these competing effects, the oscillatory behavior of plumes in canopy systems is most pronounced when canopy turbulence dominates. These effects also influence the mixing and entrainment of the plumes. We offer scaling analyses to identify flow regimes in which canopy-induced turbulence is relevant to plume dynamics.
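For orientation, one plausible form such a scaling comparison could take, sketched here from classical plume theory rather than taken from the study itself, pits the plume's buoyant rise velocity against the turbulent vertical velocities of the canopy shear layer. Here B_0 is the source kinematic buoyancy flux, sigma_w the rms vertical velocity of the canopy-induced turbulence, and h the canopy height; the symbols and the criterion are illustrative assumptions, not the study's own result:

    w_p(z) \sim \left( \frac{B_0}{z} \right)^{1/3},
    \qquad
    \Pi \equiv \frac{\sigma_w}{w_p(h)} \sim \frac{\sigma_w \, h^{1/3}}{B_0^{1/3}}

Under this sketch, canopy-dominated (strongly oscillatory) plume behavior would be expected for \Pi \gtrsim 1, and buoyancy-dominated behavior for \Pi \ll 1.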
Related Publications
Current societal trends reflect increased mistrust in science and lowered civic engagement, both of which threaten to impair research that is foundational for ensuring public health and advancing health equity. One effective countermeasure to these trends lies in community-facing citizen science applications to increase public participation in scientific research, making this field an important target for artificial intelligence (AI) exploration. We highlight potentially promising citizen science AI applications that extend beyond individual use to the community level, including conversational large language models, text-to-image generative AI tools, descriptive analytics for analyzing integrated macro- and micro-level data, and predictive analytics. The novel adaptations of AI technologies for community-engaged participatory research also bring an array of potential risks. We highlight possible negative externalities and mitigations for some of the potential ethical and societal challenges in this field.

Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and found that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
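The core of the exposed-and-receptive estimate can be illustrated with a minimal sketch: weight every exposed user by a survey-derived probability of belief and sum. The function and variable names below are hypothetical, and the logistic belief curve is a toy stand-in for the survey model; this is not the authors' code.

    import math
    from typing import Callable, Iterable

    def expected_receptive_exposure(
        ideologies: Iterable[float],
        p_believe: Callable[[float], float],
    ) -> float:
        """Expected number of exposed users who would believe the story:
        the sum, over exposed users, of each user's belief probability."""
        return sum(p_believe(x) for x in ideologies)

    # Toy belief curve: belief probability rises toward one ideological extreme.
    def toy_belief_curve(ideology: float) -> float:
        return 1.0 / (1.0 + math.exp(-2.0 * ideology))

    exposed_ideologies = [-1.2, -0.3, 0.4, 1.5, 2.1]  # hypothetical ideology scores
    print(expected_receptive_exposure(exposed_ideologies, toy_belief_curve))

Summing probabilities rather than counting exposed users is what separates the receptive-exposure estimate from a raw exposure count.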

Social media platforms are too often understood as monoliths with clear priorities. Instead, we analyze them as complex organizations torn between starkly different justifications of their missions. Focusing on the case of Meta, we inductively analyze the company’s public materials and identify three evaluative logics that shape the platform’s decisions: an engagement logic, a public debate logic, and a wellbeing logic. There are clear trade-offs between these logics, which often result in internal conflicts between teams and departments in charge of these different priorities. We examine recent examples showing how Meta rotates between logics in its decision-making, though the goal of engagement dominates in internal negotiations. We outline how this framework can be applied to other social media platforms such as TikTok, Reddit, and X. We discuss the ramifications of our findings for the study of online harms, exclusion, and extraction.

Few young people with type 1 diabetes (T1D) meet glucose targets. Continuous glucose monitoring improves glycemia, but access is not equitable. We prospectively assessed the impact of a systematic and equitable digital-health, team-based care program implementing tighter glucose targets (HbA1c < 7%), early technology use (continuous glucose monitoring starting <1 month after diagnosis) and remote patient monitoring on glycemia in young people with newly diagnosed T1D enrolled in the Teamwork, Targets, Technology, and Tight Control study (4T Study 1). The primary outcome was HbA1c change from 4 to 12 months after diagnosis; the secondary outcome was achievement of the HbA1c targets. The 4T Study 1 cohort (36.8% Hispanic and 35.3% publicly insured) had a mean HbA1c of 6.58%, with 64% of participants achieving HbA1c < 7% and a mean time in range (70–180 mg/dl) of 68% at 1 year after diagnosis. Clinical implementation of the 4T Study 1 met the prespecified primary outcome and improved glycemia without unexpected serious adverse events. The strategies in the 4T Study 1 can be used to implement systematic and equitable care for individuals with T1D and can translate to care for other chronic diseases.
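For reference, the time-in-range metric reported above is simply the fraction of continuous glucose monitoring readings falling within 70–180 mg/dl. A minimal sketch, with hypothetical data and function name rather than the study's own pipeline:

    def time_in_range(readings_mg_dl, low=70.0, high=180.0):
        """Fraction of CGM glucose readings within [low, high] mg/dl."""
        in_range = sum(1 for g in readings_mg_dl if low <= g <= high)
        return in_range / len(readings_mg_dl)

    # Toy usage with made-up readings; 7 of 10 fall in range.
    readings = [95, 110, 200, 150, 65, 130, 175, 210, 105, 90]
    print(f"{time_in_range(readings):.0%}")  # -> 70%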