Machine Learning, Crowdsourcing Speed Up Online Surveys – Multichannel Merchant



Online surveys have grown in popularity because of the ease with which they give organizations valuable insights into everything from product design and packaging to consumer buying habits. But today’s research platforms often force a tradeoff between speed and simplicity on one hand and the richness of actionable insights on the other.
A combination of machine learning technology and crowdsourcing concepts is solving this problem. It enables researchers to shorten online survey time without resorting to matrix tables, which often make surveys uncomfortably long and can skew results. At the same time, these technologies deliver the higher accuracy, deeper insights, and superior user experience of open-ended questions.
Matrix Table Challenges
Researchers have typically accelerated online surveys by asking questions not one by one but in a more space-efficient matrix format (questions are in the table’s rows, response scale options in its columns). Our study, however, shows this can skew answers to the midrange, as shown in Figure 1. It also may encourage “straight-lining” (i.e., selecting the same response for all rows of the matrix) and even prime respondents to answer in a certain way.
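Straight-lining is straightforward to screen for once responses are collected: flag any respondent whose answers show no variation across the rows of a matrix question. A minimal sketch (the function name, response data, and scale are illustrative, not part of the study):

```python
def straightlining_respondents(responses):
    """Given {respondent_id: [answers to each matrix row]}, return the ids
    whose answers are identical across every row (suspected straight-liners)."""
    return [rid for rid, answers in responses.items()
            if len(answers) > 1 and len(set(answers)) == 1]

# Illustrative 5-point-scale answers to a 4-row matrix question
responses = {
    "r1": [3, 3, 3, 3],   # same answer on every row: straight-liner
    "r2": [2, 4, 3, 5],
    "r3": [5, 5, 5, 5],   # straight-liner
}
print(straightlining_respondents(responses))  # ['r1', 'r3']
```

In practice a researcher would treat flagged respondents as candidates for review rather than discard them automatically, since some genuinely hold uniform opinions.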

A second study we conducted tested whether this observed midpoint drift could be replicated in a different set …



Convolutional Neural Networks May Allow Mapping Of Every Tree On Earth




Published on October 20th, 2020, by Carolyn Fortuna


What can you do with a massive database of high-resolution satellite images covering more than 1.3 million square kilometers of the western Sahara and Sahel regions of West Africa? Well, a group of researchers has used artificial intelligence (AI) neural networks to map the location and size of more than 1.8 billion individual tree canopies.
The results?
It may soon be possible to map the location and size of every tree worldwide. This step forward in observational capabilities is important, as it can alter the way we consider, monitor, plan, and manage global terrestrial ecosystems.

Image retrieved from NASA Visualization Studio

Writing in Nature, Martin Brandt and team analyzed more than 11,000 images at a spatial resolution of 0.5 meters. Their goal was to identify individual trees and shrubs with canopy diameters of 2 meters or more. Never before have trees been mapped at this level of detail across such a large area (although it should be noted that this method required an input of approximately 90,000 manually digitized training points, which may be untenable for some replication studies).
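The published pipeline is not reproduced here, but the post-processing idea — turning a predicted canopy mask into individual tree counts and sizes — can be sketched. The 0.5 m resolution and 2 m diameter cutoff come from the article; the connected-component flood fill and all function names below are illustrative, not the authors' code.

```python
from collections import deque
import numpy as np

def canopy_stats(mask, resolution_m=0.5, min_diameter_m=2.0):
    """Label 4-connected canopy pixels in a boolean mask and return the
    equivalent diameter (meters) of each canopy at least min_diameter_m wide."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    rows, cols = mask.shape
    diameters = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                # BFS flood fill to collect one connected canopy
                area_px = 0
                queue = deque([(r, c)])
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    area_px += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                # pixel area -> m^2 -> diameter of a circle with the same area
                area_m2 = area_px * resolution_m ** 2
                diameter = 2.0 * (area_m2 / np.pi) ** 0.5
                if diameter >= min_diameter_m:
                    diameters.append(diameter)
    return diameters
```

At 0.5 m resolution a single isolated pixel (0.25 m², about 0.56 m equivalent diameter) falls below the 2 m cutoff and is dropped, which is why a size threshold of this kind also acts as a basic noise filter on the segmentation output.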
The team completed this huge task using AI, exploiting a computational approach that involves …


READ MORE FROM SOURCE ARTICLE

Visual Artificial Intelligence on the Edge of Revolutionizing Retail



“We are laser-focused on continuous improvements to customers’ experience across our stores. By leveraging Everseen’s Visual AI and machine-learning technology, we’re not only able to remove friction for the customer, but we can also remove controllable costs from the business and redirect those resources to improving the customer experience even more.” – Mike Lamb, LPC, Kroger’s VP of Asset Protection
This post was inspired by a recent Kroger article announcing the deployment of visual artificial intelligence (AI) in 2,500 stores and by new IHL Group edge computing research. Multiple technological trends have been converging for some time, and their combination is producing transformative solutions that improve store operations.
By 2021, one billion video cameras will be deployed around the world. When artificial intelligence and machine learning are coupled with these visual data-gathering devices, endless possibilities emerge for creating immersive consumer experiences.


COVID-19 has become a disruptive accelerator of digital transformation trends that were already underway. Research suggests it takes about 66 days, roughly two months, to form a new permanent habit. New shopping-journey habits have emerged during the pandemic that will require intensified analysis of millions of data inputs, both to protect transactions and to remove friction that creates negative experiences.
What …


READ MORE FROM SOURCE ARTICLE

Artificial Intelligence improves clinical trials | Lexology



In case anyone missed it: attention on AI’s application to healthcare is apparently at ‘peak hype’. With the volume of healthcare data doubling every 2 to 5 years, it is no surprise that many are using AI to make sense of such vast amounts of data, and development of medical AI technologies is progressing rapidly. At the same time, the COVID-19 pandemic has exposed vulnerabilities in healthcare systems around the world, highlighting the need for technological interventions in healthcare. In line with these trends, the healthcare AI market is expected to grow from US$2 billion in 2018 to US$36 billion by 2025.
The breadth of AI’s application in healthcare is impressive, ranging from diagnostic chatbots to AI robot-assisted surgery. Other examples include AI-enhanced microscopes that can scan blood samples for harmful bacteria more efficiently; faster, more accurate scanning for abnormalities in radiographic images; and AI algorithms that analyze tone, language, and facial expressions to detect mental illness.
But as exciting as the prospects of these AI uses are, exaggerated and unsupported claims about AI’s capabilities in healthcare (such as its superiority over clinicians) threaten to undermine public trust in AI. This is especially important in healthcare, where patients …


READ MORE FROM SOURCE ARTICLE

Machine Learning Helped Predict Short-Term Cancer Mortality – Cancer Therapy Advisor



Researchers have validated a machine-learning algorithm that was integrated into an electronic health record to generate real-time, accurate predictions of the short-term mortality risk for patients with cancer, according to a recent study.

Additionally, this machine-learning algorithm outperformed other prognostic indices.

“Such an automated tool may complement clinician intuition and lead to improved targeting of supportive care interventions for high-risk patients with cancer,” the researchers wrote.

The prospective study included 24,582 patients with outpatient oncology encounters from March 2019 to April 2019. Encounters occurred at 1 tertiary and 17 general oncology practices. The machine-learning algorithm predicted 180-day mortality risk between 4 and 8 days prior to the scheduled encounter.

The area under the curve (AUC) was 0.89 (95% CI, 0.88-0.90) for the entire cohort, though it varied across disease-specific groups: for example, from 0.74 for neuro-oncology to 0.96 for breast oncology. No difference was found between the tertiary center and the general oncology practices.

The researchers used a prespecified 40% mortality risk as a threshold to differentiate high- vs low-risk patients. At this threshold, the observed 180-day mortality was 45.2% for the high-risk patients compared with 3.1% for the low-risk patients.
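The reported metrics can be reproduced mechanically from predicted risks and observed outcomes. The sketch below computes ROC AUC via the rank-based (Mann-Whitney) formulation and applies the study's prespecified 40% risk threshold; the function names and the toy predictions are invented for illustration, not study data.

```python
def roc_auc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen positive
    case is scored above a randomly chosen negative case (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def observed_mortality_by_risk(labels, scores, threshold=0.40):
    """Split patients at the risk threshold and return the observed
    mortality rate (fraction of deaths) in the high- and low-risk groups."""
    high = [y for y, s in zip(labels, scores) if s >= threshold]
    low = [y for y, s in zip(labels, scores) if s < threshold]
    return sum(high) / len(high), sum(low) / len(low)

# Invented example: 1 = died within 180 days, scores = predicted risk
labels = [1, 1, 0, 0, 1, 0, 0, 0]
scores = [0.9, 0.6, 0.5, 0.3, 0.2, 0.1, 0.45, 0.05]
print(roc_auc(labels, scores))                  # 0.8
print(observed_mortality_by_risk(labels, scores))  # (0.5, 0.25)
```

A large gap between the two observed mortality rates, as in the study's 45.2% versus 3.1%, is what makes such a threshold clinically useful for targeting supportive care.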

Finally, the study integrated the machine-learning algorithm with ECOG and Elixhauser comorbidity index-based classifiers, which resulted …


READ MORE FROM SOURCE ARTICLE

Experts: Artificial intelligence provides students more individualized teaching



Artificial intelligence makes teaching more efficient. Credit: Aino Huovio

There is constant discussion of using artificial intelligence and learning analytics to support teaching. New digital methods, platforms and tools are being introduced more and more, and the opportunities created by the development of artificial intelligence are to be harnessed to enhance teaching and provide students with increasingly individualized teaching. Jiri Lallimo (Project Manager, Teacher Services), Ville Kivimäki (Expert, Dean’s Unit, School of Engineering), Thomas Bergström (Expert, IT Services) and Juha Martikainen (Systems Specialist, IT Services) from Aalto University have been studying the issue.

The key is to listen to the end users
The use of artificial intelligence in teaching and learning is still at a fairly early stage, but the technology is constantly evolving and new opportunities are being discovered. The key is to remember that services are developed for students and teachers, who should be kept at the center of all development work. It is pointless to develop services and functions that do not serve end users as intended.
“Artificial intelligence and learning analytics are based on data collection, and it is important to look at it from a useful perspective, to look at how better and more …


READ MORE FROM SOURCE ARTICLE

Deep-Learning Powered Reinsurance Company Kettle Launches to Protect People From the Exacerbated Effects of Climate Change, Starting with Wildfires



Kettle is targeting the $300 billion-a-year reinsurance industry, starting with California wildfires. “Reinsurance” is an additional layer of insurance for insurance carriers that covers catastrophic events such as hurricanes or wildfires. The industry has seen a 68 percent drop in return on equity due to a 3X increase in catastrophes causing more than $1 billion in damage over the past 15 years. According to the National Oceanic and Atmospheric Administration (NOAA), 2019 was the fifth consecutive year in which 10 separate $1 billion catastrophes hit the United States.

Founded by Andrew Engler and Nathaniel Manning, Kettle is structured as a reinsurance company that can underwrite these increasing risks. Engler has more than a decade of experience working in the insurance and reinsurance industries, most recently as the vice president of digital at the public reinsurer Argo Group. Manning spent years working with data for humanitarian efforts as the CEO of Ushahidi, the largest open source software platform for community crisis response, and as the first chief data officer of USAID.
“I’ve spent years building software to enable people to get help in the aftermath of a crisis,” said Manning. “But I noticed it was the insurance companies who were providing the financial safety nets to help these …


READ MORE FROM SOURCE ARTICLE

Standard CT Technology Produces Spectral Images with Deep Learning



October 20, 2020 — Bioimaging technologies are the eyes that allow doctors to see inside the body in order to diagnose, treat, and monitor disease. Ge Wang, an endowed professor of biomedical engineering at Rensselaer Polytechnic Institute, has received significant recognition for devoting his research to coupling those imaging technologies with artificial intelligence in order to improve physicians’ “vision.”

In research published in Patterns, a team of engineers led by Wang demonstrated how a deep learning algorithm can be applied to a conventional computerized tomography (CT) scan in order to produce images that would typically require a higher level of imaging technology known as dual-energy CT.
Wenxiang Cong, a research scientist at Rensselaer, is first author on this paper. Wang and Cong were also joined by coauthors from Shanghai First-Imaging Tech, and researchers from GE Research.
“We hope that this technique will help extract more information from a regular single-spectrum X-ray CT scan, make it more quantitative, and improve diagnosis,” said Wang, who is also the director of the Biomedical Imaging Center within the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer.
Conventional CT scans produce images that show the shape of tissues within the body, but they don’t give doctors sufficient …


READ MORE FROM SOURCE ARTICLE

Baryonic Physics with Deep Learning



Title: Learning effective physical laws for generating cosmological hydrodynamics with Lagrangian Deep Learning
Authors: Biwei Dai and Uros Seljak
First Author’s Institution: Berkeley Center for Cosmological Physics, UC Berkeley, CA
Status: Submitted to arXiv (open access)
Cosmology and high performance computing often go hand-in-hand. Modelling the large-scale, filamentary structure of the Universe – and comparing it with observations – requires highly optimised code and power-hungry supercomputers. A key issue lies with running hydrodynamical simulations at a high enough resolution to model galaxy formation, within a volume large enough to encompass the next generation of sky surveys. One way to cut down on the complexity is to run a strictly dark matter-only simulation, which ignores the expensive baryons and instead adds them via analytic post-processing. Yet such methods cannot properly account for phenomena reliant on gas properties, e.g. the Sunyaev-Zel’dovich effect.

The authors of today’s paper introduce a new deep learning method, Lagrangian Deep Learning (LDL), with which to learn and model the physics governing baryonic hydrodynamics in cosmological simulations. By combining a quasi N-body gravity solver with their LDL model, the authors were able to generate stellar maps from the linear density field with worst-case computational costs at an impressive 4 orders …


READ MORE FROM SOURCE ARTICLE