Visionary Series – Alina Von Davier

We sat down (virtually) with Alina Von Davier, the Chief Officer of ACTNext at ACT. ACTNext is an R&D innovation hub at ACT focused on digital learning. Her thoughts on education in the post-COVID era were striking and insightful. Here’s the transcript of the interview, lightly edited for clarity.


Ryan: Everything changed in February 2020 with COVID-19. This is particularly true in education, which has been scrambling to adjust. What has your team at ACT been doing over the past few months in response?

Alina: There have been a lot of changes and good things that ACT put in place to respond to this crisis and help students. For example, ACT provided a set of free resources for students, parents, and teachers. ACTNext, the team that I lead, has been a big contributor to this offering of free resources. The resource we put together for science is called HERA, Holistic Educational Resources and Assessment, and we provided it to students, parents, and teachers. We’ve already received quite a lot of interest in it. ACTNext also developed ACT ALEKS, which helps people prepare for the test and learn more about it from their dining table. So we’ve been quite active at ACTNext, trying to be responsive and support students in this difficult time.

Ryan: In order to truly move the ACT testing experience online, what technologies need to be in place to make this a reality?

Alina: There are many, obviously. You need to be able to generate content quickly, ideally automatically. You need to build item banks of content for tests. You need to have that content properly calibrated with respect to difficulty. So that’s one area that requires technology. It could be done by humans, but it’s just extremely hard to scale up.

Then you have, of course, remote proctoring capabilities. Given that a large high-stakes test has about half a million students being tested at one time, the capabilities you need in place for that many students are quite different from those needed by a university that has just one or two classes of a few hundred students. You need a blend of artificial intelligence and live video streaming from the students, combined with human proctors. So that’s a scalability matter. It has never been tried at this scale and at this level of high-stakes testing.


Ryan: We’ve heard from other education thought leaders that COVID-19 has essentially taken where education was headed anyway and accelerated it. Do you find this to be true?

Alina: Yes, definitely. I mentioned to you that ACTNext was created with exactly that goal in mind: to help the company transition from paper and pencil to a digital environment with capabilities driven by machine learning and data. So we’ve been working on many of these capabilities already. Everything that is happening now is actually a validation of the type of work we’ve done so far, of its quality, and of the capabilities and infrastructure we have developed.


Ryan: If you had a crystal ball, what does education look like in our post-Covid future?

Alina: Well, I do think that, of course, we have to look at the short term and the long term. So I think in the short term we’ll see more hybrid models: an integration of technology-driven pedagogy and subject-matter-expert teaching and learning.

Also, as a society, we need to identify our role in supporting schools and districts to make sure that every child has access to the internet so that online learning is available. So I would bet on this hybrid model with technology-based learning and personalized learning. Having kids sit for eight hours of classes on video chat isn’t realistic.

The design of the curriculum, the design of the teaching itself will need to change. Can you use simulations? Can you use games? We’ll need variation in the offering of content and in the offering of opportunities where we actually try new things, something more than one-way directional teaching from the instructor to the student.

I think also the assessment itself will need to change and need to be better connected with content tagged to a taxonomy. And also, to be honest, I think we need to look at micro-learning with micro-assessment. So there has been some progress made on micro-learning, but almost none on micro-assessment. And I think it’s time for all of us to work together and figure that part out.

I don’t really see the point of having anything that is not appropriate for an iPhone or any other type of smartphone. I just don’t see that anymore. We need to go where our students are. And if that’s what they use as their primary medium for pretty much everything, I think we should meet them where they are instead of forcing them into antiquated environments: “Go and sit in a classroom. Go and sit and take a test for three hours. Go and sit and listen to someone talking to you for three hours.” I think we need to change all of that. And it seems that now people are more receptive. None of this is necessarily new; I just believe that now there is better reception for it.


Ryan: What fears do you think parents have around blended or online learning models?

Alina: Well, we don’t know yet how well younger children will deal with sitting in front of a computer for a long time. And we don’t quite know what the long-term effects of this type of interaction are. Also, I would make a distinction between older learners and younger children in this environment. The more independent the learner is, the more appropriate the online modality is. And as you go towards the younger age, you probably need a human to be part of that experience. So I can definitely understand the concern of parents of younger children around this.

Children still need supervision, and working parents cannot provide it. Working from home is a very strange concept at the moment because, for many parents, it means working from home while also taking care of the children, so it’s not just working from home. I think many parents are overwhelmed by the responsibility to supervise and evaluate, and to make sure that their children actually receive the right education. They are concerned there might be gaps in their children’s educational development. Having that responsibility on you as a parent is quite significant.

Parents are probably a bit skeptical about the quality of some of the digital offerings too. There are very few ed-tech companies that have research-based systems in place, very few that have peer-reviewed papers, and very few that have demonstrated efficacy for their tools. Therefore, parents are left with a myriad of offerings, some of them free, but many without rigorous research-based validation. Yes, they might have recommendations from teachers and from parents, but that’s different from large-scale validation of methodologies and tools.


Ryan: Let’s talk a little bit about technology. As an industry, what technology hurdles do we need to overcome to make online learning truly engaging for a wide variety of learner ages and types?

Alina: Big companies like ACT, the College Board, and others have been around for many years and have built their data and infrastructures in ways that supported their success over the past half-century. Many of them leverage different providers for different products, and these providers have different ways of managing the data, naming the variables, and aligning the data. So here we are: in theory, there’s a lot of data out there, individual data about students.

But in practice, if we want to leverage algorithms and take advantage of the technology to make the best predictions for students, it’s difficult to pull all of this data together. Most companies in the testing industry have a data governance deficit, and that needs to be addressed.

Now extend that to schools, and you find the same problem. Most schools do not have good data infrastructure. Of course, there are exceptions everywhere, but most of them don’t. Sure, a district might create dashboards, but if you dig in, you find all sorts of problems with the governance of the variables and the data. So it’s very hard to align databases from schools and school districts, which is of course a complication.

Another area that’s very complex is how to provide quality content at scale. This is a serious problem for all of us. You can use crowdsourcing and take some risks with that, and depending on the purpose, that could be fine. However, if you go for high-stakes assessment with crowd-sourced items, that’s not going to work, for reasons ranging from test security to the quality of the items themselves. If instead you rely on highly trained test developers, the process is very expensive and very hard to scale.

Then we have Open Educational Resources, which I think is a wonderful opportunity for all of us to figure out how to use more. But for this to be effective, we need to figure out how to measure efficacy for these resources as not all of them are high-quality. We need curation, an efficacy index to help us figure out what is valuable and what isn’t.

And then of course we have automatic content generation, which is what my team has been working on. I have an artificial intelligence and machine learning team, and they’ve been developing a platform and capability called the Sphinx. It’s a machine and human collaboration: the machine learns from the humans as it goes through the process of generating resources. So that was the second problem, generating content.

The third problem that I think is still quite complicated at scale is creating proper personalized systems, both adaptive and personalized. And why is that difficult? Most of the ed-tech companies that build learning systems have been using only predictions for their recommendations. That’s fine in some cases, but not in all cases, and especially not in education, where a lot of factors are changing all the time. What is lacking is a psychometric framework. Ed-tech companies would need to borrow this theoretical framework and embed tools for reliability, validity, and generalizability into their systems. This isn’t as straightforward as people seem to think.


Ryan: What resistance, if any, do you see in the education community to a digital/hybrid world?

Alina: Yes, there is. I like to make an analogy to medicine, but I’ll get to that in a second. In education, pretty much all the changes that we want to bring to life are received with skepticism. And most of this skepticism is about comparability to the past, especially when we talk about measurement or assessment. If you want to come up with a new assessment, with new and more engaging item types, then the pushback you get from policymakers is, “What about trends? Can I compare the results from this new, modern version of the test exactly to the old one?” And the fact that those measures need to be comparable seems to be very important to many policymakers, in the United States at least.

So back to medicine. If we introduce a new medicine that can save lives, nobody will push back saying, “Well, but this is unfair to those who died last year.” And yet, that’s exactly what we do in education, especially around assessments. We place too much value on trends and comparability to the past in all of our assessment design. We need to be thinking about how to synchronize our assessments with the current age and the current ways of thinking and working.


Ryan: Any final thoughts?

Alina: I absolutely feel optimistic about the future of education. We are seeing an acceleration of trends, and we just need to be humble and collaborate. It’s not that technology is here to replace teachers and upend hundreds of years of pedagogy. Technology is there to help the teachers, and we want teachers to be users and to become very knowledgeable about technology. That’s the ideal state.


