Iridescent is now in its 12th year and entering a new stage of operation. Since 2006, we have been scaling our programs worldwide. This past year, we focused our efforts on building robust, responsive systems on three fronts: 1) training and supporting partners, 2) collecting and analyzing impact data, and 3) sharing formative evaluation results with partners quickly enough to improve the next round of implementations. We provided this support to ~35,000 participants from 7,000+ organizations this year, and are excited to improve quality and scale moving forward.

In 2018, we are committed to bringing a higher level of transparency to impact reporting. In partnership with the Center for Research on Lifelong STEM Learning, we were able to quantify learning gains by the number of hours of participation in our programs. These findings provided a very important baseline and informed our next strategic decision: to count only participation of 6 or more hours toward our impact numbers. We could easily implement one-off programs with low impact, but we learned that behavioral changes take place after 6 or more hours of our programming. As an organization, we are committed to improving participant retention and quality of learning. We will collect, analyze, and share learning gains and program impact data – highlighting retention and attrition at each stage – on a quarterly basis. Read on for more information about what we’ve accomplished in 2017, and our goals for the year ahead.

2017 Review

This year, we tracked (and reached!) five goals set out in our 2015–2019 strategic plan – scale (reach 35,000 participants), cost efficiency ($10/contact hour), learning gains, brand strengthening, and financial goals. In the past, we primarily tracked only the number of participants reached. Although we had previously completed impact evaluations, the data was not regularly analyzed.

Data Collection and Analysis: Learning Gains

It was only this year that we were able to design and implement data collection systems that accurately track participant attendance during our programs and correlate it to impact. Once a measurement system was in place, we were able to begin thinking about increasing dosage. This required building many layers of infrastructure to create a scalable, cost-efficient assessment system that provides feedback on ways to improve itself. For instance, we integrate data collection into program operations by analyzing clickstream data in order to increase participants’ interest in answering assessment questions. Partners invest more time in collecting data when they understand the significant impact of implementing our programs and how to provide the best learning experience for all participants. The best way to achieve this buy-in is by sharing prior program analyses and helping partners leverage impact data to raise more funds for program operation.

Data collection systems take time to build, integrate, and deploy with high fidelity at each and every implementation.

What takes even more effort is creating systems to track alumni data. We have found it cost prohibitive to track down participants from underserved communities whose phone numbers have long since changed. A potential alternative is a “pull” mechanism: for example, a resource-rich online community that gives past participants many reasons to stay connected to us – access to new learning opportunities, connections to mentors, internships, and even job opportunities. Large-scale, cost-effective assessment can only be done if it is seamlessly integrated into program operations.

The first step is to improve the program structure and better support the participant community, so that participants are motivated to return as alumni and mentors. The second step is to create systems to collect, analyze, and disseminate impact data.

So we are very excited to share that 2017 was the first year we were able to systematize learning gain data collection, analysis and reporting. With this body of work, we are able to add more facets to our two longitudinal studies that have already shown the long-term impact of Curiosity Machine and Technovation on participants.

The 2017 internal evaluations of 19 Curiosity Machine implementation data sets (n = 769, paired = 126) helped us refine our understanding of what drives impact, connecting dosage with the type of learning gains. We found significant increases in creativity and persistence in students after 6 hours of our programming. In addition to our internal evaluation, the Center for Research on Lifelong STEM Learning found significant increases in the following traits for students:

  • constructive coping & resilience after 8 hours
  • interest in future STEM engagement/career after 6 hours
  • understanding the purpose & relevance of science after 6 hours
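Findings like these rest on paired pre/post comparisons: each of the 126 matched participants contributes a before score and an after score, and the gain is tested for significance. As a minimal sketch (the survey scale and scores below are synthetic illustrations, not our actual instruments or data), a paired t-test on matched pre/post scores can be computed like this:

```python
import math

def paired_t_test(pre, post):
    """Paired t-test: returns (mean gain, t statistic, degrees of freedom)."""
    assert len(pre) == len(post), "scores must be matched pairs"
    diffs = [b - a for a, b in zip(pre, post)]   # per-participant gain
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator)
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var / n)                      # standard error of mean gain
    return mean, mean / se, n - 1

# Synthetic pre/post survey scores on an assumed 1-5 scale
pre  = [2.8, 3.0, 3.2, 2.5, 3.1, 2.9, 3.3, 2.7]
post = [3.4, 3.2, 3.6, 3.0, 3.5, 3.1, 3.8, 3.2]
gain, t, df = paired_t_test(pre, post)
```

A t statistic well above the critical value for the given degrees of freedom indicates the mean gain is unlikely to be chance; in practice a library routine such as SciPy's `ttest_rel` would also report the p-value directly.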

Other reports include an analysis of actual student projects, the impact of our programs on mentors, and the variation of Technovation’s impact on girls in different countries.

Read more of our evaluation impact reports on our Impact Reports Page.

Goals for 2018

In 2018 we will focus on improving participant retention in our programs through comprehensive data collection. Since setting our 5-year strategic plan goals, we spent the first few years working on achieving scale, learning gains, cost-efficiency and financial stability goals. For instance, in 2015 we reached ~8,000 participants and this year we will be engaging ~30,000 participants (an almost 4x increase).

We will shift our focus to retention, while continuing to increase scale and depth of impact. This will be accomplished by sharing best practices across Technovation and Curiosity Machine, creating mutual reinforcement between our programs. We are especially interested in focusing on developing the Curiosity Machine model, since it embodies a cross-cultural, intergenerational approach that uses the collective impact ideology to bring together a community of individuals supporting student learning – a key part of Iridescent’s mission. 

Curiosity Machine AI Family Challenge

February 2018 will mark the launch of our new program, the Curiosity Machine AI Family Challenge.

In partnership with the Association for the Advancement of Artificial Intelligence (AAAI), we are launching the first global Curiosity Machine AI Family Challenge – a three-stage competition for 20,000 underserved 3rd–8th grade students and parents (especially mothers), who will use AI technologies and tools to solve problems in their communities along the tracks of health, energy, food, transportation, education, public safety, and civic engagement.

Our main goal is to build on our two-generation learning approach by empowering parents and children to develop a learner mindset that will give them a lifelong ability to innovate and solve problems.

In addition to AAAI, we will be working with ~300 community organizations that will directly engage underserved populations. The total project budget is $3.2 million, including $1.6 million that will go to community partners to support program launch, materials, educator training, and celebration events.

We will begin the program implementation in February 2018, training community partners and engaging families. Families will go through an 18-week (50-hour) curriculum spanning three stages: Foundations of AI (Spring), Exploring Robotics (Summer) and AI for the Community (Fall). The competition will culminate in the Final Showcase event in Spring 2019.

Program goals

Curiosity Machine Learning Gains goals

Based on the baseline we established in 2017, we have set the following goals:

  • 60% of students sustain self-efficacy as a STEM learner
  • 60% of students intend to continue STEM learning
  • 50% of parents increase self-efficacy as a STEM learner
  • 75% of parents indicate improved understanding of technology and engineering
  • 75% of educators indicate greater confidence in leading hands-on STEM learning experiences

Technovation Learning Gains goals

  • 70% of Technovation girls demonstrate increased knowledge of coding and computational thinking skills
  • 80% of Technovation girls are interested in learning more about CS and entrepreneurship
  • 80% of Technovation girls demonstrate increased knowledge of engineering design process and problem solving skills
  • 90% of Technovation girls are more confident in using technology
  • 60% of Technovation Student Ambassadors develop their leadership and sense of self-efficacy
  • 60% of mentors experience gains in technical communication, product development skills, mobile app development, lean design, project management and leadership
  • 20% of parents have improved understanding of strategies they can use to better support their daughters in pursuing CS and entrepreneurship careers
  • 30% of parents understand the value of technology education for girls

Participant Retention

  • Curiosity Machine aims for 60% retention of participants through the first stage (a 5-week in-person program) and 50% of participants completing the second stage (with 3 design challenges built at home).
  • Historically, ~50% of girls who registered for Technovation have gone on to finish the full 100-hour curriculum. Our goal for the 2018 season is to increase in-season retention to 70% and year-to-year retention to 45% (from 17%).

Program quality and partner management (both Technovation and Curiosity Machine)

  • 80% of mentors and teachers are satisfied with training and support
  • 80% goal for year-to-year partner retention

Summary

2017 was a momentous year for Iridescent in many ways, especially because it marked a new phase of maturity for us. Our DNA is still one of a fast-moving, agile, and innovative small team, but we are putting systems in place that will enable us to efficiently find answers to the larger question – “What does it take for thousands of underserved children worldwide to build a sense of self-efficacy as innovators, leaders and entrepreneurs?”

We learn more with each day, and hope to continue doing that and sharing our progress with you in 2018!