Iridescent is now in its 12th year and entering a new stage of operations. Until now, we have been hard at work scaling our programs worldwide. This past year, however, we focused our efforts on building robust, responsive systems on three fronts: 1) training and supporting partners, 2) collecting and analyzing impact data, and 3) sharing formative evaluation results with partners quickly enough to improve the next round of implementations. We provided this support for ~35,000 participants from 7,000+ organizations this year, but have room to improve moving forward.
In 2018, we are committed to bringing a higher level of transparency to impact reporting. This year, in partnership with the Center for Research on Lifelong STEM Learning, we were able to quantify the type of learning gains based on the number of hours of participation in our programs. This gives us a very important baseline, and based on the findings of this partnership, we have decided to count only dosages of 6 or more hours toward our impact numbers. It is easy enough to run one-off programs, but we learned that behavioral changes take place after 6 or more hours of our programming. As an organization, we commit to fully focusing on improving participant retention and quality of learning. We also commit to collecting, analyzing, and sharing learning gains and program impact data – highlighting retention and attrition at each stage – on a quarterly basis. Read on for more about our 2017 impact and our goals for the year ahead.
This year, we were able to measure, track, and reach five of the goals set out in our strategic plan – scale (reach 35,000 participants), cost efficiency ($10/contact hour), learning gains, brand strengthening, and financial goals. In the past, we primarily tracked only the number of participants reached. We also conducted impact evaluations, but the data collected wasn't analyzed regularly.
Data Collection and Analysis: Learning Gains
It was only this year that we were able to design and set up data collection systems that accurately track participant attendance during our programs and connect it to impact. Once we put a measurement system in place, we were able to begin thinking about increasing dosage. This may sound trivial, but it requires building many layers of infrastructure to create a scalable, sustainable, cost-efficient assessment system that can feed back into our programs and improve them. For instance, we need to integrate data collection into program operations as smoothly as possible (for example, by analyzing clickstream data) so that more participants are willing to answer assessment questions. We also recognize that partners need to understand the importance of the data so they invest more time in collecting it. The best way to achieve this buy-in is to share prior program analyses and to help partners leverage the impact data to raise more funds for program operation.
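As a simple illustration of the dosage rule described above – counting only participants with six or more contact hours toward impact numbers – here is a hypothetical sketch. The record format and field names are assumptions for illustration, not our actual data schema:

```python
from collections import defaultdict

def qualified_participants(attendance_records, min_hours=6):
    """Sum contact hours per participant and keep only those who meet
    the minimum dosage threshold (hypothetical record format)."""
    hours = defaultdict(float)
    for record in attendance_records:
        hours[record["participant_id"]] += record["session_hours"]
    return {pid for pid, total in hours.items() if total >= min_hours}

# Example: participant "a" attends three sessions (6.5 hours total),
# participant "b" attends one (3.0 hours).
records = [
    {"participant_id": "a", "session_hours": 2.0},
    {"participant_id": "a", "session_hours": 2.5},
    {"participant_id": "a", "session_hours": 2.0},
    {"participant_id": "b", "session_hours": 3.0},
]
print(qualified_participants(records))  # only "a" crosses the 6-hour threshold
```

Under this rule, a participant who attends a single short workshop would not appear in the impact numbers at all, which is exactly the conservatism the dosage finding calls for.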
All these data-collection systems take time to build, to integrate into program operations, and to deploy at each and every implementation with high fidelity.
What takes even more effort is creating systems to track alumni data. We found that it is cost-prohibitive to try to track down participants from underserved communities whose phone numbers have long since changed. What does work is a “pull” mechanism, such as a resource-rich online community that gives past participants many reasons to stay connected to us – access to new learning opportunities, connections to mentors, and access to internships and even job opportunities. Thus, large-scale, cost-effective assessment can only be done if it is seamlessly integrated into program operations.
The first step, therefore, is to improve the program structure and better support the participant community so that participants are motivated to return as alumni and mentors. The second step is to instrument those interactions to collect, analyze, and disseminate impact data.
We are very excited to share that 2017 was the first year we were able to systematize learning-gain data collection, analysis, and reporting. With this body of work, we are able to add more facets to our two longitudinal studies, which have already shown the long-term impact of Curiosity Machine and Technovation on participants.
The 2017 internal evaluations of 19 Curiosity Machine implementation data sets (n = 769, paired = 126) helped us develop a more fine-grained understanding of the causes of impact, connecting dosage with the type of learning gains. We found significant increases in creativity and persistence in students after 6 contact hours. In addition to our internal evaluation, the Center for Research on Lifelong STEM Learning found significant increases in students':
- constructive coping & resilience after 8 hours
- interest in future STEM engagement/career after 6 hours
- understanding the purpose & relevance of science after 6 hours
Other reports analyze actual student projects, the impact of our programs on mentors, and the variation in Technovation's impact on girls across countries.
Read more of our evaluation impact reports on our Impact Page.
Goals for 2018
In 2018 we will work toward measuring and improving participant retention in our programs. Since setting our 5-year strategic plan goals, we have spent the first few years focused on achieving our scale, learning gains, cost-efficiency, and financial stability goals. For instance, in 2015 we reached ~8,000 participants, and this year we will be engaging ~30,000 participants (an almost 4x increase).
Moving forward, we will shift our focus to retention while continuing to increase scale and depth of impact. We will accomplish this by sharing best practices across Technovation and Curiosity Machine, and by adapting the Curiosity Machine program to meet a growing need in the world of engineering and technology.
Curiosity Machine AI Family Challenge
February 2018 will mark the launch of our new program, the Curiosity Machine AI Family Challenge.
In partnership with the Association for the Advancement of Artificial Intelligence (AAAI), we are launching the first global Curiosity Machine AI Family Challenge – a two-stage competition for 20,000 underserved 3rd-8th grade students and their parents (especially mothers) to use AI technologies and tools (sensors, data analysis tools) to solve problems in their communities, along the tracks of health, energy, food, transportation, education, public safety, and civic engagement.
Our main goal for this initiative is to build on our two-generation learning approach and empower parents and children to develop a learner mindset that gives them a lifelong ability to innovate and solve problems.
In addition to AAAI, we will be working with ~300 community organizations that will directly engage underserved populations. The total project budget is $3.2 million, including $1.6 million that will go to community partners to support program launch, materials, educator training, and celebration events.
We will begin the program implementation in February 2018, training community partners and engaging families. Families will go through an 18-week (50-hour) curriculum spanning three stages: Foundations of AI (spring), Summer Robotics (summer) and AI in your Community (fall). The competition will culminate in the Final Showcase event in Spring 2019.
The graph below shows the projected number of participants we aim to reach through Technovation (14,750) and through the Curiosity Machine AI Family Challenge (~40,000, dependent on our ability to raise appropriate levels of funding).
Curiosity Machine Learning Gains goals
Based on the baseline we established in 2017, we have set the following goals:
- 60% of students sustain self-efficacy as a STEM learner
- 60% of students intend to continue STEM learning
- 50% of parents increase self-efficacy as a STEM learner
- 75% of parents indicate improved understanding of technology and engineering
- 75% of educators indicate greater confidence in leading hands-on STEM learning experiences
Technovation Learning Gains goals
- 70% of Technovation girls demonstrate increased knowledge of coding and computational thinking skills
- 80% of Technovation girls are interested in learning more about CS and entrepreneurship
- 80% of Technovation girls demonstrate increased knowledge of engineering design process and problem solving skills
- 90% of Technovation girls are more confident in using technology
- 60% of Technovation Student Ambassadors develop their leadership and sense of self-efficacy
- 60% of mentors experience gains in technical communication, product development skills, mobile app development, lean design, project management and leadership
- 20% of parents have a better understanding of the strategies they can use to support their daughters in pursuing CS and entrepreneurship careers
- 30% of parents understand the value of technology education for girls
Retention goals (both Technovation and Curiosity Machine)
- Curiosity Machine aims for 60% retention of participants across a 5-week in-person program, with 50% of participants completing 3 design challenges.
- Historically, ~50% of girls who registered for Technovation have gone on to finish the full 100-hour curriculum. Our goal for the 2018 season is to increase in-season retention to 70% and year-to-year retention to 45% (from 17%).
Program quality and partner management (both Technovation and Curiosity Machine)
- 80% of mentors and teachers are satisfied with training and support
- 80% year-to-year partner retention
2017 was a momentous year for Iridescent in many ways, especially because it marked a new phase of maturity for us. Our DNA is still that of a fast-moving, agile, and innovative small team, but we are putting systems in place that will enable us to find more answers, in more efficient ways, to the larger question: “What does it take for thousands of underserved children worldwide to build a sense of self-efficacy as innovators, leaders, and entrepreneurs?”
We learn more each day, and we hope to continue learning and sharing with you in 2018!