The big idea “What if?”
Please use this page and its sections to summarize one Team's solution to a problem/challenge. This can be short paragraphs, bullets, sketches, etc. that provide a high-level overview of the product, idea, project, or concept. The purpose of this final report is to quickly and concisely communicate the main points to a reader.
Please take your time to craft a clear and compelling summary.
Opportunity
Use this block to remind the audience of the one solution the Team selected to build out to solve a problem/challenge “INSIDE OF THE HIGH SCHOOL CLASSROOM or IN THE HIGH SCHOOL EDUCATIONAL SYSTEM”. This should be a clear and compelling summary of:
- What problem/challenge is the Team trying to solve?
- What benefits does the solution offer?
- Why is this a priority?
- How do we create an AI tool that's personalized to the student experience? In other words, a student-ideas-first pedagogy.
- Traditional pedagogy is: teacher does, student does with some help, student does on their own. XQ schools instead typically ask students first what they think and how they would solve the problem. While that is slower at first than simply showing someone how to do something, it develops a person's ability to think for themselves and to tackle the next problem, which in turn makes the learning more lasting.
- Great teachers succeed at student-ideas-first pedagogy, but median teachers do not. Research has shown that when these programs are implemented, inexperienced teachers can leave students worse off, but when it works, students outperform their peers in all kinds of exciting ways. Is that really different from the distribution of outcomes in a more traditional classroom? Shouldn't AI make it more possible for teachers and students to succeed in a student-ideas-first pedagogy? Therefore, AI shouldn't give students the ideas first.
- An adaptive AI measurement tool that can propose content or materials based on student progress against competencies?
- How can we understand what students understand? Where is the student? What do they think? How do we invite student ideas first?
- How do we get credentialing behind that?
- Assessment is a challenge if it isn't focused on student interest.
- Can it mix in documentation, analysis, and other forms of inspection?
- Can AI be used as a tool to detect checkpoints in student progress?
- If AI can scan through student work and give suggestions to teachers on what might be going wrong, that could be the key.
- The real challenge is to get students to care about assessment and take it seriously. For example, if we're using a ten-question quiz as a standard to train an adaptive AI model to recognize competency progression in student work, it's very likely that students do poorly not because of a lack of understanding, but because they don't care about the test.
- We need a balance between complex learning experiences that depend on many competencies and traditional testing methods. Using traditional testing methods as a baseline ultimately incentivizes students to think only about traditional measures.
- Finished products are an easier place to identify and translate what was learned than the early stages of discovery during a learning experience.
- Should we separate use of AI into buckets:
- project delivery
- assessment
- conceptual understanding
- How can AI be used to build on top of the learning experience development process? Can we allow teachers to get real-time coaching?
- A teacher dashboard to track student progress and guide intervention
- Tagging student work against competencies is labor-intensive work (across multiple domains, standards, competencies, etc.); see the tagging-assist sketch at the end of this list.
- Research shows that if you give a student a slider on a proficiency scale with no stakes attached to it, students are pretty spot-on in their self-assessments.
- Time horizons in education research are very short.
- Run a large-scale model-training experiment where we have a set of pre-tagged learning experiences and gather all of that data over a ten-year span.
- If we can better help teachers understand the opportune moments of intervention via some dashboard, that is what helps lift the median performance.
- It’s possible to provide teachers a set of information they’re looking at in real time as kids do work, but it’s really tough to rely on teachers to do the same tagging back to proficiencies. Ultimately, this is what makes teachers return to familiar traditional learning experiences that are already well known.
- Open source protocols
- Peer feedback is one of the most powerful pedagogical practices, but it can get really messy if we incorporate this data to train AI models.
- Generative AI use-case categories:
- Looms: a substitute for human labor
- Slide rules: make human labor more efficient, but human skill is still required
- Cranes: extend what humans can fundamentally do
- Need more cranes
- There are political obstacles to selling the "cranes" to parents, teachers, and admins, because at the end of the day people don't understand why they're valuable.
- How can we extend student interest using generative AI to reach intellectual depth?
- How do we expand our ability to map evidence and different artifacts to competencies in PBL?
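To make the tagging and teacher-suggestion ideas above concrete, here is a minimal sketch of what an AI tagging assist could look like. This is a sketch under stated assumptions: the competency names and indicator phrases are invented placeholders, and the keyword scoring stands in for what would more plausibly be a language-model call. The shape of the workflow is the point: the tool suggests tags with evidence attached, and the teacher accepts, edits, or rejects each one.

```python
from dataclasses import dataclass, field

# Hypothetical competency rubric: names and indicator phrases are
# placeholders, not an agreed XQ or Common Core mapping.
COMPETENCIES = {
    "modeling-with-mathematics": ["equation", "variable", "graph", "model"],
    "evidence-based-reasoning": ["because", "data shows", "source", "cite"],
    "iteration-and-revision": ["draft", "feedback", "revised", "improved"],
}

@dataclass
class TagSuggestion:
    competency: str
    score: float                 # crude evidence strength in [0, 1]
    evidence: list[str] = field(default_factory=list)

def suggest_tags(student_work: str, min_score: float = 0.25) -> list[TagSuggestion]:
    """Suggest competency tags for one piece of student work.

    The keyword heuristic stands in for a language-model call; what
    matters is the suggest-then-review loop, not the scoring method.
    """
    text = student_work.lower()
    suggestions = []
    for competency, indicators in COMPETENCIES.items():
        hits = [kw for kw in indicators if kw in text]
        score = len(hits) / len(indicators)
        if score >= min_score:
            suggestions.append(TagSuggestion(competency, round(score, 2), hits))
    # Ranked by evidence strength; the AI never finalizes a tag itself.
    return sorted(suggestions, key=lambda s: s.score, reverse=True)

if __name__ == "__main__":
    sample = ("In my second draft I revised the equation after feedback, "
              "because the data shows the model underestimated growth.")
    for s in suggest_tags(sample):
        print(s.competency, s.score, s.evidence)
```

In a real deployment these suggestions would surface in the teacher dashboard discussed above, with the teacher as the final authority on every tag.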
Product
Title: IdeaGen: Project-Based Task Library
Product Overview:
IdeaGen is an innovative idea generator tool designed to curate a rich library of highly instrumented project-based tasks. Its primary aim is to bridge the pedagogical landscape by offering a robust platform where educational frameworks can seamlessly interact and evolve. By leveraging a structured and well-mapped approach, IdeaGen fosters a dynamic environment where tasks are not only aligned with existing educational frameworks like Common Core but also easily tethered to emerging frameworks like the XQ Student Performance Framework.
Core Features:
- Dynamic Task Library:
- Generates a small (at first) collection of project-based tasks, each meticulously structured and instrumented to encourage holistic learning and assessment.
- The library serves as a repository that educators can delve into to find projects aligning with their teaching objectives.
- Adaptive Framework Mapping:
- Facilitates seamless mapping between tasks and various educational frameworks.
- Ensures projects are relevant and adaptable to both traditional and emerging educational models.
- Data-Driven Insights:
- Using embedded assessment and other machine learning analysis of student work, these tasks offer robust reporting capabilities, providing insights into student performance and project effectiveness.
- Data reports reflect both traditional metrics (state standards and practices) and the metrics and outcomes relevant to the selected educational framework, such as the XQ framework.
- Framework Economy:
- Creates an "economy" of educational frameworks by allowing tasks to be aligned with multiple frameworks simultaneously.
- Appeals to a broad spectrum of educational models, from traditional schools adhering to Common Core to new project-based learning institutions exploring competency-based frameworks, and attempts to create conversation and collaboration across the boundary between traditional and innovative schools (because both can look at the same student work).
- Competency Alignment:
- Aligns tasks with specific competencies aimed at fostering a competency-based learning environment.
- Provides clarity on how each task contributes to the development of particular competencies.
- User-Friendly Interface:
- Simplified user interface ensuring ease of navigation and task selection.
- Features a search and filter functionality to find tasks aligned with specific frameworks or competencies.
- Collaborative Environment:
- Encourages collaboration among educators to share, rate, and provide feedback on tasks.
- Supports a community-driven approach to enriching the task library, with some standardization of the quality of instrumentation so the library can bridge traditional and innovative schools. Over time, more projects could tie into these tools. (A data-model sketch follows this list.)
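To illustrate the multi-framework mapping described in the features above, here is a rough sketch of one possible data model for a library task, plus the search/filter behavior. Everything here is an assumption for illustration: the field names, the example standard codes, and the XQ alignment labels are not a committed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One entry in the task library. All field names are illustrative."""
    title: str
    description: str
    # The same task can map to several frameworks at once; this is what
    # lets traditional and innovative schools share one library.
    framework_alignments: dict[str, list[str]] = field(default_factory=dict)
    competencies: list[str] = field(default_factory=list)

LIBRARY = [
    Task(
        title="Community Water-Quality Study",
        description="Students model local water data and present findings.",
        framework_alignments={
            "common-core-math": ["HSA-CED.A.2", "HSS-ID.B.6"],
            "xq-student-performance": ["original-thinkers", "learners-for-life"],
        },
        competencies=["data-analysis", "public-communication"],
    ),
]

def find_tasks(library: list[Task], framework: str, standard: str) -> list[Task]:
    """Search/filter: tasks aligned to a given standard within a framework."""
    return [
        t for t in library
        if standard in t.framework_alignments.get(framework, [])
    ]

print([t.title for t in find_tasks(LIBRARY, "common-core-math", "HSA-CED.A.2")])
```

Keying `framework_alignments` by framework name is the design choice that lets one task participate in several framework "economies" simultaneously.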
Impact:
IdeaGen is envisioned to be a cornerstone tool in modern education, providing a fertile ground where traditional and emerging educational frameworks coalesce. By fostering an adaptive and collaborative platform, it aims to propel project-based learning to new heights, ensuring students receive a well-rounded, competency-driven education, irrespective of the educational model their institution adheres to. Through its data-driven insights, educators are empowered to make informed decisions, tailoring their teaching approaches to maximize student engagement and learning outcomes.
Resources
- Discuss the team, technology and other inputs required to make this happen. This could touch on:
- Core technologies needed
- Time to develop
- Team and/or skillsets required
- Large expenditure considerations or estimated budget
- Use a generative model to help curate well-instrumented tasks that are useful to traditional systems and innovative schools (similar to EL modules that turned into full PBL experiences); see the prompt sketch after this list.
- Use traditional AI to save time tagging project evidence against competencies and to listen to student conversation to help provide real-time feedback.
- Traditional AI models need a narrow, domain-specific scope to start before we abstract the learning model.
- It’s best to focus on AI products rather than the AI model itself
- Is this a teacher dashboard?
- What does student work look like in a shared task? How does this help cross share in traditional school models and project-based schools?
- Identifying competencies and opportunities for assessment has varying degrees of challenge depending on the subject or standard. The XQ Math example was used because it was decided to commit to 80% of standards.
- Student-design, real-world, or hands-on projects are the typical buckets of well-instrumented tasks.
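One way to read the generative-curation idea above is as a prompt-assembly step that asks a model to draft a task already shaped to the library schema, arriving pre-tagged and with assessment checkpoints. The sketch below only builds the prompt; `call_llm` is a hypothetical stand-in for whichever model API is eventually chosen, and any generated draft would still need expert review before entering the library.

```python
import json

def build_task_prompt(topic: str, standards: list[str],
                      competencies: list[str]) -> str:
    """Assemble a prompt asking a generative model for a well-instrumented
    task: one that arrives pre-tagged and with assessment checkpoints."""
    # Schema mirrors the illustrative Task data model sketched earlier.
    schema = {
        "title": "string",
        "description": "string",
        "framework_alignments": {"common-core-math": standards},
        "competencies": competencies,
        "checkpoints": ["observable evidence a teacher can look for"],
    }
    return (
        f"Draft a project-based task on '{topic}' for high school students.\n"
        f"Return JSON matching this schema exactly:\n{json.dumps(schema, indent=2)}\n"
        "Each checkpoint must name the competency it gives evidence for."
    )

prompt = build_task_prompt(
    topic="linear equations in community budgeting",
    standards=["HSA-CED.A.2"],
    competencies=["modeling-with-mathematics"],
)
print(prompt)
# draft = call_llm(prompt)  # hypothetical model call; the draft would be
#                           # expert-reviewed before joining the library
```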
Product
- Database of tasks for PBL schools with custom tagging systems available against competencies
- Schools come up with their own tasks and they have good infrastructure to train models
- Experts build based on experience and create the exemplar tasks
- An AI engine generates the instrumented tasks with tagging enabled
- Traditional research to launch interventions
- Knowledge base language model to create your own personalized model
- Can AI evaluate the level of rigor in a learning experience?
- How can we make an easy interface to generate the last-mile project given a number of parameters?
- Different schools want to tag competencies differently (see the crosswalk sketch after this list)
- Can the AI model read text, scan discussions, and understand reflections on student work to track the learning process?
- How can this be useful to the traditional system and useful to the innovative schools?
- If well-instrumented tasks start off with an introduction to linear equations and link to a bunch of tasks, then we have algebra.
- Can we abstract it over time to different subjects of math, and then layer on different abstractions for subjects?
- Key focus on AI as a product and not the training model
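Because different schools want to tag competencies differently, one lightweight option is a crosswalk layer that translates each school's local tag names into the library's shared competency vocabulary, so student work stays comparable across schools. The school names, tags, and mappings below are invented for illustration.

```python
# Each school keeps its own tag names; the crosswalk maps them onto the
# library's shared competency IDs. All names here are invented examples.
CROSSWALKS = {
    "school-a": {
        "math-habits": "modeling-with-mathematics",
        "voice": "public-communication",
    },
    "school-b": {
        "quantitative-reasoning": "modeling-with-mathematics",
    },
}

def to_shared_tags(school: str, local_tags: list[str]) -> list[str]:
    """Translate a school's local tags into shared competency IDs,
    keeping unmapped tags visible so gaps in the crosswalk surface."""
    mapping = CROSSWALKS.get(school, {})
    return [mapping.get(tag, f"UNMAPPED:{tag}") for tag in local_tags]

print(to_shared_tags("school-a", ["math-habits", "portfolio"]))
# -> ['modeling-with-mathematics', 'UNMAPPED:portfolio']
```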
Ethical Considerations & Risks
Use this block to share more about what we should address before launching this solution, both ethically and morally. Responsible technology begins with acknowledging technology is not inherently neutral. We need to deliberately consider all the potential impacts, good and bad, direct and indirect, that a solution that involves technology can create.
- Responsible tech
- Who else needs to be at the table to validate the efficacy of the solution?
- Are there data privacy concerns?
- Are there ethical considerations or biases that need to be raised?
- What are the worst-case scenarios if this solution is developed?
- Technical risks
- Are there any data challenges?
- Are there any engineering challenges?
- What bottlenecks could exist in development?
- How long until students or teachers understand or know how to game the system?
- What are the second- or third-order effects of widespread adoption? The success of these tools is highly dependent on getting the competencies right, getting the learning experiences right, etc.
- Are we measuring differences in teaching levels or student socio-economic status?
- What stakes are we placing on teachers and students to impact student outcomes?
- Should AI models tag student work or student reflections and give actionable feedback to teachers against student progress?
- What does the teacher-student feedback loop look like, and what due-process considerations apply when we override AI-generated output?
- Should AI just be used in pre- or post-application? The great thing about AI is that it unlocks new possibilities and learning opportunities. For instance, we may not have taught folks how to organize and extract Excel data, but we can jump right into asking interesting questions about it because of tools like Code Interpreter.
Technical risks
- Potential technical constraints if data isn't clean enough for model training: a baseline 10-question quiz is great for AI models as a way to create an adaptive measurement tool.
- Can we rely on expert opinion to identify some standard or competency to train AI models?
- If we can document teacher reflection, could we automate the tagging of student evidence to use to train the AI model?
- How do we capture student work using AI?
- Key moment: a mind shift from training AI models on a baseline 10-question quiz (clean data for predictive models) to training AI models on examples of student work against competencies. See the training sketch after this list.
- The challenge is that we don't know what high-quality student work looks like against competencies. This could potentially be solved by using expert opinions to manually tag the data (i.e., does this behavior count as using competency X?).
- Potential data sources: student goal setting, student reflection, teacher feedback, teacher reflection
- AI may be wrong in detecting and giving meaningful feedback to teachers and students because it can't understand the level of guidance teachers are giving students. Teachers may be incentivized to give that guidance or not report it.
- The use of technology needs to have a much tighter scope to be effective
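The mind shift above, from clean quiz data to expert-tagged examples of student work, implies a supervised-learning starting point. Below is a minimal sketch using scikit-learn: a TF-IDF bag-of-words baseline trained on a handful of expert-labeled snippets. The snippets and labels are fabricated placeholders, and a real effort would need far more data, inter-rater agreement checks, and validation before anything classroom-facing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Expert-labeled snippets of student work (invented placeholders):
# label 1 = expert judged this as evidence of the target competency.
examples = [
    ("I revised my model after the data contradicted my first guess.", 1),
    ("I copied the formula from the board and got the answer.", 0),
    ("We compared two data sources and explained the discrepancy.", 1),
    ("The project was fun and my group finished early.", 0),
]
texts, labels = zip(*examples)

# TF-IDF + logistic regression: roughly the simplest credible baseline
# to try before reaching for a language model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

new_work = ["I changed my equation after checking it against the data."]
print(model.predict_proba(new_work))  # [P(no evidence), P(evidence)]
```

Even this toy version makes the risk noted above concrete: the model only sees text, not the level of teacher guidance behind it.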
Landscape & Rollout Model
Use this block to share more about:
- Position
- What are the current market trends, and are there trends the solution is capitalizing on?
- What does the current solutions landscape look like, and where does this product fit in? Similarly, why is this product different?
- Total addressable market
- Who does the solution serve and how big is that population?
- Who are the other potential stakeholders who will interact with or be affected by the solution and how big is that population?
- Distribution Plan
- What is the path to adoption to ensure this gets into the education system? Into the hands of the user? Are there specific stakeholders that need to be involved?
- Who are the first adopters (students, teachers, administrators, parents, policymakers, educational institutions, etc.) and who comes later on?
- What media channels exist that would be well suited to market the product?
- Tasks are open to the public
- May become more project-based or more traditional, but it should still be the same content, same tasks, same standards (a 100-hour math project versus a 1-hour math circle)
- Examine student outcomes across traditional and innovative schools
- Create a large distribution channel
- Influential project-based schools agreed to use them
- Will this be enough to stimulate the market?
Appendix
- Acknowledgments
- Glossary of terms
- Academic papers
- Educational toolkits
- Code repositories (if applicable)
OPTIONAL EXERCISE: Building with Measurement and Efficacy: Guiding the Implementation of AI in Evidence-based Educational Solutions
Aiming to strike a balance between fostering efficacy and ensuring agility in the development cycle of potential software innovations, this exercise invites participants to foresee and strategize the integration of efficacy-boosting tactics, all without curbing innovation or overtaxing resources.
Measuring Your Innovation's Effectiveness
Keep the learning objectives of your innovation forefront during this exercise. Embracing responsible design requires pinpointing and evaluating the elements of your innovation set for testing. Below is a template illustrating how to align your objectives with measures of effectiveness, steering towards responsible design and continuous refinement.
Provocations for reflection: What metrics will inform the effectiveness? How will you collect and interpret data to infer effectiveness?
Exercise:
1. NEAR TERM Efficacy Objective Identification:
What specific outputs do you plan to track and measure in the short term to evaluate the progress and success of your innovation during the immediate period (within 3-6 months of deployment)?
For all intents and purposes of this exercise, we define an "output" as the immediate results of an innovation’s activities, tasks, and actions. Output metrics are specific, measurable, and tangible.
Propose meaningful metrics that you can reasonably achieve and are ready to be held accountable for.
- Purpose: To identify the key efficacy objectives, propose functions to achieve them, and determine the frequency of these functions.
SHORT TERM
| Activity associated to Output | Output Metric (objective) | Output Target (unit or %) |
| --- | --- | --- |
| Type here! | Type here! | Type here! |
2. MEDIUM-LONG TERM Efficacy Objective Identification:
What medium- to long-term outcomes do you anticipate your innovation delivering for learners in the medium term (within 9-18 months of deployment)?
How do you plan to measure and evaluate these intended outcomes to track the success and impact of your innovation over time? This exercise defines "outcome" as the specific and measurable changes, effects, or benefits that will result from the implementation of your innovation. Within this theme, focus on how your innovation will build toward your GOALS! This will be specific to the project, but for example, you could inquire about how to measure the tool's impact on career path selection or describe how it will help learners develop self-efficacy in relation to educational and career goals.
Propose meaningful metrics that you can reasonably achieve and are ready to be held accountable for.
- Purpose: To identify the key efficacy objectives, propose functions to achieve them, and determine the frequency of these functions.
MEDIUM TERM
| Activity associated to Output | Output Metric (objective) | Output Target (unit or %) |
| --- | --- | --- |
| Type here! | Type here! | Type here! |