Assessing ABET Outcomes for Materials Programs

[MUSIC PLAYING]

Hello, and welcome to

another episode of Undercooled,

the materials education podcast.

So today, Steve and I are

hanging out in the office.

And we're going to talk

about the acronym that

is no longer an acronym.

It's just a name all by itself.

That's right, ABET.

Maybe once upon a time

it stood for something.

But we'll get into the

history with Steve here.

But today, we're going to talk about

ABET, what it's good for,

what it's useful for, what

it might not be good for,

and how it can inform what we

do in our materials programs

and help us teach better.

So Steve, to kick things

off, why don't you just tell us,

what is ABET and why

should materials departments

care about it?

So ABET used to stand for the Accreditation Board for Engineering and Technology.

Now it's just ABET.

And it's because they do

more than just engineering

and technology.

They do applied stuff as well.

But whatever, it's

really the same organization.

It's a federation of 30 to 40

professional societies

that get together and

they make bylaws and set up

all the rules and

regulations for doing accreditation

for engineering schools.

And ultimately, it's about making sure

that the people who are

designing your bridges,

your airplanes, your boats, actually

know something about what

they're supposed to know.

So as a country, as a

society, as the world,

we can actually trust the engineers

that our educational

institutions are putting out.

That's the big reason.

And it's even recognized in the United States as the accreditor for US engineering programs.

But ABET also performs many

international accreditations

at the request of

schools in other countries

because it's become

kind of the gold standard

for engineering accreditation.

And other countries want to have--

I mean, it's kind of funny.

They want to say, we have the same

accreditation as MIT does.

So for whatever

that's good for, but we'll

get into that later on.

So why do MSE programs care about this?

Well, they should care because we usually

have a few students who want

to go into careers where they

must become professional engineers.

And as you know, Tim,

the TMS has a committee that writes the professional engineering exam for materials. It's really mostly metallurgy. And although I don't think more than 20 or 30 people a year take that test, it's there in case they need it.

Where would they need to do this?

It would be companies like Exponent.

These are consulting companies that often

do a lot of litigation.

When something goes wrong, some company or some person sues another company or another person, and they go to court.

And in court, there are all these rules

about who's allowed to

testify as an expert witness.

For anything involving the government, only people with professional engineering licenses are going to count.

But it's also to impress the jury.

So the jury is going to want to know

that it's a professional engineer.

And guess which society is also a member of ABET: the National Society of Professional Engineers.

And a long time ago, they insisted that--

well, hey, they make their own rules.

If you want to be a

professional engineer,

you must have graduated from an

accredited engineering program

or have 11 years of

equivalent experience.

So do we really want our students

to spend 11 years

before they can start a job?

They might want to start right after graduating. In our department, maybe 1% or 2% of our students every year want to do it.

So we do it for them.

And it makes parents and

alums feel really good.

So those are the two main reasons

why we partake in accreditation.

OK, yeah, it definitely makes sense

about wanting to give

students the opportunity

to earn certifications, to have access

to different career

paths, different opportunities

down the road that

ultimately rest on the program

that they graduated

from, having demonstrated

a certain minimum

quality of the education that

is being provided there.

So that seems reasonable.

Now, my understanding

is that to get certified

as an ABET accredited

institution, part of the process

is having ABET, essentially,

inspectors come to the program

and do an on-site physical visit.

What do those visits look like?

And what's the hardest

part of preparing for a visit?

What do you have to do to get ready?

That might not be obvious.

So that's a great question.

And first of all, it's

very important to realize ABET

doesn't accredit

institutions or departments.

They only accredit programs.

So they look at the

degrees that are being granted.

And in our case, we have one materials science degree that says a bachelor's in materials science and engineering.

That's a program.

And that is what is accredited.

And that's all ABET accredits.

So what you need to do to be accredited,

it's a little different the first time,

but we've been

accredited for so many years.

I wasn't even here when we were first accredited.

I've been involved in some, but I

think you need to have graduated at least

one student before you

can be accredited.

And when you go for your first

accreditation visit,

you're visited by two

evaluators, not one.

So that's a little different.

But for most of us in most programs-- there are about 125 accredited materials programs in the country.

And the vast majority of

those have been accredited before.

So what you need to do, your

institution will call up ABET

and say, hey, we're

ready to be accredited.

We're going to start the process.

Here are all the

programs we'd like to do,

because ABET does a

visit for all the programs

at an institution at the same time.

The first thing you need to do once all that's arranged-- well, that's all dean stuff, so don't worry about that.

First thing a program needs to do

is start six years before that moment,

because they really need to

be preparing for this right

from the day after their last visit.

And I'll get back to that.

So what you need to do is create what's

called a self-study.

And ABET gives you a template to fill in.

ABET is all focused

around their criteria.

They have eight criteria.

I'm not going to go through all of them.

The most important

ones are the first few.

So criterion one is all about the students. Where do the students come from, how are they advised, what kind of tracking do you do, how do you handle mental health issues, how do you do admissions, all that stuff.

And the second criterion is all about program educational objectives.

And these are those statements that

talk about what a graduate

should look like a few years

after they graduate.

What's our aspiration for that?

And those don't even need to be measured,

because some of our aspirations are we

want them to be creative.

We want them to solve

the world's problems.

And how do you

actually measure those things?

But it's important to

have high aspirations,

because that kind of

drives the whole thing.

I actually believe that the objectives

are the most valuable part of the ABET

process for a program.

It's also one of the easiest

parts to be compliant with,

because all you need to do is consult

with your constituencies

at least once every six years, and ask them: are our objectives, as written, meeting your needs as a constituent group?

Our program has three

constituencies, the students,

the faculty, and our alums.

Some programs get crazy, and they say,

their constituents are the universe.

Well, how are you

going to ask the universe

what their needs are?

You know, we just can't

get to some of those planets.

So that would be a very

unwise thing for a program to do.

It's great to just have three.

It works.

But it's so important

that you meet with them

and document that thing.

That's actually easy to do.

The third criterion is the simplest,

because ABET says, here

are the student outcomes, one

through seven.

And you can add more, but what crazy program would add more and pile more work onto its plate?

So just do what ABET asks you to do.

You just list it.

Those are your outcomes.

I think there's a table.

You show how they're

related to your objectives.

That's easy to do.

It's the fourth criterion that usually stumps everybody. And that's the criterion for continuous improvement. And it's very short if you read the words of the criterion. It's not much language.

But it says that you have to have an outcomes assessment process that is performed regularly.

And in ABET, that means at least two cycles during the six years from when you started to when you get accredited.

And it must be appropriate.

And that's a big catch word.

That could mean literally anything.

Yes, it can.

So those are the two words that usually catch most programs.

And so it really all comes down to,

how are you going to assess

the outcomes for the students?

How are you going to do it

in a way that is regular?

And how is it appropriate?

So after being on many ABET visits

and going to many ABET symposiums

and even being on the board of directors

for ABET for a while,

I've gotten a good sense of what they

actually mean by that,

even though it's not

explicitly written down.

So my takeaways for a lot of this

are that what ABET

really cares deeply about--

and you'll get this from

anyone from ABET you talk to--

they care deeply about

continuous improvement.

This all came from ISO 9000, and it turned into ABET's EC2000 criteria, all of a sudden, with these words.

But basically, they

changed ABET dramatically

around the year 2000, where they wanted

to be more like what industry does

for their continuous improvement.

And there's a lot of good ideas in there.

I have my own personal beliefs. I don't believe students in education are like a product on an assembly line.

So it's a little harder to measure

students' achievement

of outcomes than it is

to measure quality control

on the dimensions of a

part, the hardness of the metal,

all those things.

Those are very easy to measure.

Measuring learning is really hard.

And actually, none of us really know how

to do it in a scalable way.

So that's the hardest thing.

But what ABET cares about is that you at least have a process that makes a very serious attempt to measure.

And in the criteria, it says that you must measure the extent to which the graduates of your program have achieved the seven outcomes.

Now, let's unpack that.

"The extent to which"-- ABET doesn't say anything about everyone having to get an A. They just say your program needs to know how well your graduates are doing, so you can use that information, if you need to, to improve your program.

It's a diagnostic.

The other thing is the word "graduates."

They're not looking to learn about what students in the first or second year are doing,

because as you know, it's hierarchical.

You build on the knowledge.

And the kinds of things

and the outcomes are--

outcome one is to use science and engineering to solve complex engineering problems, which usually means open-ended problems.

That's not something we

expect our freshmen to do.

It's something we expect

our seniors to be able to do.

So it really makes-- because of that,

it makes little sense to do

assessment in the first or second

year if all you care about is

meeting the objectives of ABET.

Now, there's lots of people who might

want to do assessment

earlier for their own purposes,

and that's fine.

And in fact, the mantra--

at least it used to be--

I hope it still is-- at the TMS

Accreditation Committee,

used to be improve your

program for yourself first

and worry about ABET later.

Because if you're doing a good job

improving your program

for yourself, you should have no problem

documenting your processes

and showing ABET that you did it.

The converse is sort of

a fool's errand, right?

To just do it for compliance alone is

kind of wasting your time.

If you're going to do

something, make it meaningful.

So that's what you

should really be doing.

But at any rate, this is the number one criterion where shortcomings are delivered: either there is no good process, or the program only did it once instead of at least twice, or it's not appropriate. And what they mean by that last one is just running through the paces without it giving you any information.

The last part of the criterion says

that you must use the results of the

measurements, the assessment,

as input to your

continuous improvement process.

And you must have a continuous

improvement process as well.

The last line of that criterion is the most valuable.

It says you can also use as input

anything else to improve your program.

Now, this has been

debated by many people at ABET,

but ABET has made it exceedingly clear

that the word input is

there and not output.

So if you do your assessment and you

analyze your results

and you show that everyone is doing great

above whatever threshold

you decide is important,

then you may not be able to use the data

you collected to

actually improve your program.

And that's OK.

Because then you can

bring in those other inputs,

information from your advisory board or

from industry partners

to get that information instead.

And so we have really good students.

They come in, they all take our intro

engineering course, Engin 100,

which is really a fantastic course.

It's really an English course, a

communications course,

but it's cast in a framework of a design,

build, test environment,

teaching freshmen how to do engineering

design right from the get-go.

So our students who take that, they learn

how to work in teams,

they learn about ethics, they learn about

design, they learn

about doing experiments.

They do everything that our

ABET outcomes ask us to do.

So by the time they come into our department-- at Michigan, they don't join our program until they've had at least one term-- wow, they're amazing.

And I remember the days before Engin 100

when it wasn't that way.

I'm sure they were dark times.

Yes. And so things have really gotten

better in the sense of at

least outcomes two through seven.

Outcome one is the toughest outcome.

Outcome one is using engineering, math,

and science to solve

complex engineering problems.

That's difficult.

That's the one we beat

on the students hard.

We have very high expectations for them.

And so the scores for that outcome are

always lower college-wide

for all of these things.

So all that's kind of cool.

So at any rate, you know, that's the most difficult part.

And so we've developed a whole new

approach to doing this.

We started this six years ago.

We actually started in our department

earlier, but we've rolled it out and are

now doing it on a college-wide basis,

automated, trying to follow some basic

principles to make sure our process is

completely sustainable.

And so this is the process for measuring

the student outcomes.

Yes.

OK, so from the instructor side, I can

see my perspective on it.

But I want to actually come back to

something else you said earlier, which is

the priority order of what to do for

improving the program

versus what to do for compliance.

And when I'm thinking about making

changes, hopefully improvements to my

courses, I'm always saying, well, what do

I really want to do with my class?

And then looking at the ABET outcomes

and saying, is there an outcome that this

change I'd like to make

happens to be well tied to?

And if so, to me, that's a good indicator

that it's something worth pursuing

because most of the ABET outcomes are

pretty transparently things that I think

we should want our students to do.

We should want them to consider societal,

economic, environmental implications of

their engineering work.

We should want them to design experiments

that have scientific validity.

These are just good things to do anyway.

So as I'm thinking about my courses,

anytime I can point to an ABET outcome

and say, by the way,

I'm also achieving this.

In addition to just doing what I believe

is good teaching, that's always my anchor

for how to make those changes.

And that's great.

There's kind of two viewpoints of this.

There's the ABET viewpoint that they

believe that they're driving all of our

education by mandating these outcomes.

But, you know, their outcomes are kind of motherhood statements. They're kind of obvious.

And I'd like to think that our program

actually has many more outcomes beyond

just what ABET requires.

And I think any good program will, of

course, do all the things ABET wants.

But the real improvement of the programs,

at least my experience here in Michigan,

has never really come from doing the

assessment of the outcomes.

The real improvement comes from people

just like yourself having that good

attitude of trying to do what's best for

our students in a broad range of

areas.

So we have a very robust undergraduate committee that constantly reviews our curriculum.

We do curricular reviews apart from ABET

because we think it's important.

We try to have meetings where we put

together people who teach Thermo and

Kinetics and ask, are you getting the

right kind of

background in your students?

What's missing?

And you know this better than anyone, because you're teaching a math course: we're not getting students coming into our Thermo and our Kinetics courses with enough math. And so we're going to tailor the math they need, to supplement what the math department gives them, so they can perform better in our Thermo and our Kinetics courses.

Another example of massive improvement in our program didn't come from ABET either, but came from the work you did-- we talked about it in another podcast-- on your alloy design module in the lab.

You worked with the people who taught Thermo because, while they're taking Thermo, our students are taking your lab, where they're learning how to use tools like ThermoCalc to design their own materials, their own alloy. Using those thermodynamic principles they're learning in Thermo, they actually make it, pour dog-bone test specimens, test them, and then try to understand why it didn't work, because it rarely works. But you know it will once you get really good at it-- you've got to start somewhere.

So how is that captured

in our assessment results?

You need to go way beyond just assessment. You need to, you know, be creative and innovative, and you need to inspire a culture in a program of faculty caring about their undergraduate students.

And I am so happy to say I think we've

got an amazing culture here

at Michigan doing just that.

So here's the way I try to document that. I know that every faculty member innovates in every single course they teach.

And so when it comes time to write the self-study, I put a note out to the faculty asking them to write me half a page to a page on what they're most proud of having developed in the previous six years.

And that's, I think, the best part of our self-study, because it's honest; it's from the heart. It's the real, actual improvements that all these individuals make.

Often with the help and support of the

undergraduate committee or other things.

But what a great way I think to document

all of that and to really demonstrate

that we're doing massive improvements to

our program all the time.

And it's coming from what the students tell us, because we have town hall meetings every year with our students to hear their comments.

We heard a comment last term that the BioMed course is actually too biological and not enough materials science. And guess what, we're acting on that.

And so Brian Love, who also agrees with that assessment even though he's taught the course before, is actively trying to change that course now.

We also get comments from our external

advisory board who are people in industry

who give us a heads-up about things they need.

We get information from our alums with

whatever careers they followed.

So, you know, it's a lot of good stuff that we use to improve our program.

But I view this outcomes assessment process, which we're mandated to do, as actually a critical part of the whole thing.

And the way I think about it is, these are the diagnostics that, if we don't do them, can get us in serious trouble. It's just like engineers design that little material inside your brake pads to start squealing when the brake pads get thin. You need that there even if you change your brake pads religiously and never hear it. You want to know, before something bad happens, that it's about to happen. And that's how I view our outcomes assessment. At the very end, I talk about the future of what we can do.

I think we can even learn more from our

outcomes assessment.

Well, let's get to nuts and bolts for a minute here, because we have this big-picture, aspirational vision of what all this assessment accomplishes, right? What it helps our program get better at.

But there is still

that implementation layer.

The actually doing it, which might not be obvious, especially if a program is trying to step up its game in terms of having an easier, more efficient time checking the ABET boxes while doing great teaching. So how do you actually do outcomes assessment? How is that implemented at a practical level?

That is absolutely the critical question.

We all know the biggest problem with

doing outcomes assessment is

getting our faculty to do it.

The only way to really probe the students efficiently, where we distribute the load of the work, is to have every single instructor of any course that we're using to assess our students do the work.

So it comes down to some fundamental concepts: build a process that everyone will participate in, in the easiest possible way, so that it becomes sustainable and we do it all the time.

So the first thing is, ABET, you know, doesn't tell you how to do this; they just tell you that you have to do it. I've seen a lot of suggestions, like at ABET symposia, and I don't like most of them.

Like when they say, oh, well, you could do outcomes one and two this term, and outcomes three and four the next term, and then outcomes five and six the term after that, and then cycle, so that by the time you're done with six years, you've done it two times.

The problem with that is everyone forgets

what they were supposed to be doing

because we're creatures of habit.

So we need to do all of the outcomes

every single term in my opinion and find

a way to make it as easy as possible for

our instructors to actually do the work.

So that's the first thing: make it easy for the faculty and instructors so that it actually gets done.

The next thing we need to figure out is how to do this and also make it meaningful.

So what we've done-- and a lot of programs do this, by the way; I think people have learned over the years, so it's pretty standard now-- is we build a matrix. We look at all of our required courses and all of our elective courses, and we try to assign no more than one outcome to any given course.

Now, we can't do this for all courses, because we lean very heavily on our design and our lab courses; those are the courses where it's more appropriate to measure things like teams, communications, design, designing experiments, ethics.

So those are outcomes two through six,

two, three, four, five and

six. That's five outcomes.

But luckily we've got two lab courses and

we have two design courses.

So we're able to break it up so they only

have three outcomes to do.

Some programs wait until the last year and dump all seven outcomes on the capstone design course, which makes sense because you're measuring, with your graduates, the extent to which they've learned it. But that makes it unsustainable, in my opinion.

Right. It's even more work for the person

who's already doing the hardest class.

Exactly. And that's just not fair.

So we've broken it up.

So our idea is let's collect a relatively

sparse data set, but do it every single

term so that we end up with a massive

amount of data after six years.

We do this for 12 terms.

And I have to say it kind of works well.

So that's the first thing.

And we also only do assessment in our

junior and senior classes.

We assess outcomes one

through six in our required courses.

So regardless of path, every student is assessed in our required courses, because they all have to take them.

Then, because we want to distribute the load again, outcome seven-- which is lifelong learning: learning different methods of learning, all that-- is done in our electives. We have all of our elective courses measure that outcome.

So regardless of which elective courses

our students take,

they're being assessed.

So we do assessment of all outcomes, regardless of path, for all of our students.
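To make that load-balancing concrete, here is a minimal sketch of such a matrix in code. The course names, the outcome assignments, and the three-outcome ceiling are illustrative assumptions, not the actual Michigan matrix.

```python
# Hypothetical course-to-outcome assessment matrix. Required junior and
# senior courses split outcomes 1-6; every elective measures outcome 7,
# so students are covered regardless of which electives they pick.
ASSESSMENT_MATRIX = {
    "required lab 1":    [2, 3],  # e.g., design and communication
    "required lab 2":    [5, 6],  # e.g., teams and experiments
    "required design 1": [4],     # e.g., ethics
    "required design 2": [1],     # complex, open-ended problems
    "elective A":        [7],     # lifelong learning
    "elective B":        [7],     # lifelong learning
}

def check_coverage(matrix, n_outcomes=7, max_per_course=3):
    """Confirm every outcome is assessed somewhere and no course is overloaded."""
    covered = {o for outcomes in matrix.values() for o in outcomes}
    missing = set(range(1, n_outcomes + 1)) - covered
    if missing:
        raise ValueError(f"outcomes with no assessing course: {sorted(missing)}")
    for course, outcomes in matrix.items():
        if len(outcomes) > max_per_course:
            print(f"warning: {course} carries {len(outcomes)} outcomes")

check_coverage(ASSESSMENT_MATRIX)  # passes: every outcome 1-7 is assigned
```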

So once we've done that,

we need one more thing.

How can we make this even easier for our instructors? That has to do with the actual mechanics. First, they have to create the assessment measures-- we expect our faculty to create them.

We have a choice.

We could hire assessment professionals

and have them tell us what

the assessment should be.

Or we can ask our faculty.

I happen to think our faculty are better

positioned because

they're teaching the course.

They know what they want

their students to learn.

And after all, don't we want all of our

students to be able to solve complex,

open ended problems?

And all of our faculty do that.

So they should find something within the context of their course, so it's meaningful for them and their course parameters, and use that to measure the particular outcome.

So what I've done is I made a series of videos-- one short video for every single outcome-- explaining to the faculty good approaches to building an outcome assessment.

And that assessment measure can be a homework problem.

It can be an exam problem.

It can be an activity that's not even

graded for the course, whatever the

faculty member wants to do.

But if it's outcome one, for instance, it

must involve a complex problem.

And I show them the definition of what a

complex problem is to ABET.

And I simply say, if you look at that, it's really just an open-ended problem.

And our faculty are great at

writing open ended problems.

So every faculty member, and they usually

only have to do this once, because if

they teach the same course over and over,

they can keep using the

assessment tool if it's good.

And they can talk to other faculty if

they're inheriting a

course and borrow theirs.

That's OK.

But in each case, we have the faculty member make a document where they first write down the outcome they're assessing, because you want it first and foremost in their minds.

Then what is the actual assessment?

And they write down that problem in great

detail, exactly what the student would

see, what you're

asking the student to do.

And then finally, they write a short little paragraph explaining why they believe this is an excellent measure of the outcome they're trying to assess.

And again, that's just to make sure that

it's present in their minds and they've

actually thought about it.

Once they do that, they put that in a

document and make a PDF.

And at the end of the term,

they upload that document.

So they'll get a link.

This comes from our college, because we've developed this system across all 12 of our programs. The college has our matrix, and every program has an ABET coordinator-- I'm the ABET coordinator for our program.

So every term I review the list of who's

teaching what courses that

are being used for assessment.

I confirm that with the college.

They send a special link to each

instructor with everything pre-filled.

So what term it is, what course they're

teaching, their name, all that stuff

that's all in there.

It's done for the faculty member.

All the faculty member has to do is

upload the PDF of what was the metric.

And they have to upload

the scores of the students.

This is where it gets tricky.

So we want to know the actual unique name, the student identifier, for every score. This is going to be critical for another onerous thing that ABET has made us do. ABET demands that we disaggregate the data. What does that mean?

That means that we can only consider

students who are in our program when we

do the analysis of our results.

They don't want it contaminated by a graduate student in the class or by a student from another department.

I still don't understand

why, but we have to do it.

If we know the unique names of every student, we know that information; we have a big data warehouse that's got all that stuff. And in the back end, we can easily filter for the students who are in our program. Making a faculty member do that would be insane.

Because if you have 60 people in your class, you don't know what program they're in.

And you don't even want to

know what program they're in.

You want to treat them all equally.

You don't want to have a bias.

Oh, they're graduate students, they should do more work. Or, oh, these are students from-- well, I won't say the name, but that other major that we like to pick on. You don't want to know any of that. You just want to know that they are students in your class.

And plus, it's really hard: a faculty member would have to, you know, look up what program they're all in. What a pain.

It's easier to just do your

whole class and dump it in.
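As a concrete illustration of that back-end filter, here is a minimal sketch in Python with pandas. The column names, program labels, and toy roster are hypothetical stand-ins for the real data warehouse.

```python
import pandas as pd

# Scores as an instructor uploads them: the whole class, unique name
# plus a score normalized 0-100 (column names are assumptions).
scores = pd.DataFrame({
    "unique_name": ["aaa", "bbb", "ccc", "ddd"],
    "score":       [88.0, 72.5, 95.0, 61.0],
})

# Enrollment roster as it might come out of the data warehouse.
roster = pd.DataFrame({
    "unique_name": ["aaa", "bbb", "ccc", "ddd"],
    "program":     ["MSE BSE", "ME BSE", "MSE BSE", "graduate"],
})

# Disaggregation happens here, in the back end, not in the classroom:
# join on the student identifier, keep only the accredited program.
mse_only = scores.merge(roster, on="unique_name")
mse_only = mse_only[mse_only["program"] == "MSE BSE"]
print(mse_only)  # only the rows for students in the MSE program
```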

And now we're

launching a new way to do this.

So if you have a spreadsheet, you just

highlight the two columns, the unique

names and the score.

And we ask all faculty to put in scores normalized zero to 100, so we can combine the scores with other courses to see how students are doing across an outcome.

They just highlight those things and copy, and go to a line-- I never knew you could do this in a Google Form, right in one line. You just paste it, and it's kind of like comma-separated values.

But in the back end, they use regular

expressions to parse the data to put it

back into a spreadsheet.

We're doing this because these

spreadsheets we were getting from faculty

in the past were all over the map and it

became really ugly for the back end

people to deal with this data.

So this should be

much better at any rate.
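For the curious, here is a toy version of that parsing step in Python. The sample string, the delimiters, and the pattern are assumptions about what pasted spreadsheet cells can look like, not the actual back-end code.

```python
import re

# Two highlighted spreadsheet columns pasted into a one-line form field
# often arrive as one long string: tab between cells, whitespace between
# rows (an assumption for illustration).
pasted = "aaa\t88 bbb\t72.5 ccc\t95 ddd\t61"

# One student record: a unique name, a tab, then a numeric score.
pair = re.compile(r"([a-z]+)\t(\d+(?:\.\d+)?)")

rows = [(name, float(score)) for name, score in pair.findall(pasted)]
for name, score in rows:
    # instructors are asked to normalize scores to 0-100 before pasting
    assert 0 <= score <= 100, f"{name}: {score} is not normalized"
print(rows)  # [('aaa', 88.0), ('bbb', 72.5), ('ccc', 95.0), ('ddd', 61.0)]
```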

Now, the reason it works: the first thing you might think is, oh my God, that's a FERPA violation. That's student records; you can't make that public. And we don't.

But the only way it works is because our

university signed a legal agreement with

Google to make sure that the instance of

Google that we have is FERPA compliant.

So we're allowed to use Google mail from

the university to talk about student

issues and all of that.

If your university doesn't have that

agreement, you'll have

to find another solution.

But it works really well for us. And that

student identifier is incredible, as

you'll see in a few minutes.

So that's all. From a faculty member's point of view, they just give that assessment,

collect that data, upload a PDF of their

metric and upload the student data.

And they're done. It takes very little

time. And believe me, I remind everybody

to make sure they do that assessment so

they have that data.

And it works. After doing this for six and a half years now, we have 100 percent compliance-- except the very first year, when one of our 80-year-old faculty members who didn't understand Google Forms didn't do it.

But that's pretty good. Yeah, I have to say, on the user side, being tasked with three outcomes per semester due to the lab class, it takes me an hour, maybe an hour and a half tops, to do this once a semester.

And as you said, I just have my assessment items.

They're already in our

learning management system.

So I just grab the student scores,

normalized to 100, grab the student IDs,

paste columns in a sheet,

upload to our collection form.

And I'm done. It's really quite painless.

Yeah. And from the ABET coordinator

point of view, I just go to the Google

spreadsheet that they build and I can

instantly see who's

done it and who hasn't.

I can click on the documents

and check them very quickly.

And if there's a problem with a faculty

member, I just go visit the faculty

member and talk to them

and help them make it better.

So it's great. And the really cool thing is, when it's all done, there's this program called Tableau that I'm sure many places use. It's, you know, a database program that ingests spreadsheets, and you can build a UI that does different filtering on what you see.

But the data we care about: what we want is to be able to see a histogram of the actual values of the scores that our students got, where every single piece of data is on that histogram.

In the old days, we used to report an

average and we would say if the average

is above this number, we're good.

And of course, ABET evaluators-- through no fault of ABET, but because evaluators like to come up with new stuff-- started

saying, yeah, well, what

about the standard deviation?

And what about the modality? And it's

like, yeah, well, what about it?

And, you know, eventually it's going to

become part of ABET evaluator lore that

if this isn't done, you're going to get a

shortcoming and there's

nothing you can do about it.

So how do you protect yourself against what I call the ABET virus? Because, you know, 13 visitors on a team hear about some great idea, which has nothing to do with the real thing, and then the next year they go to 13 different programs and infect all those new teams. All of a sudden it's growing exponentially, like a virus.

So how do you protect

yourself against that?

Well, you know, we're scientists.

We know that the best thing to do always

is just to look at all the data.

And so what we plot is the histogram of

all the data so we instantly can see what

the standard deviation is, what the

modality is, if there's skewness, we can

look at the tails, you know, all of that.

We just see it in an instant.

And in a way, it makes the analysis of

the data very straightforward because you

just look at it and you can tell.

So what we do is, for each outcome, there's a view built on the spreadsheet, and it has little boxes. First you choose the department. And when you choose the department, you can choose which outcome you want to look at.

Then you choose which terms and you can

check boxes and choose, you know, like

one term or one academic year or two

academic years or

three, whatever you want.

And you'll pull in all the data.

Then you'll see for that particular

outcome, here's this

curve, which is a histogram.

And there's no curve drawn.

It's just the actual data.

And it gives you an average.

We have to filter out zeros sometimes, because when students drop a class, they still show up in the instructor's grade book.

So the zeros usually are not meaningful.

Sometimes they are, but whatever.
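Here is a minimal sketch of that histogram view in Python. The scores are made up, and the 60-point line simply echoes the 2.0-GPA threshold mentioned later in the episode; it is an illustration, not the actual Tableau dashboard.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up normalized scores for one outcome, pooled across courses and
# terms and already filtered to students in the program.
scores = np.array([0, 0, 48, 55, 62, 68, 74, 75, 78, 81,
                   83, 85, 88, 90, 92, 95, 98])

# Drop zeros: usually students who dropped the class but stayed in the
# grade book, not real measurements.
scores = scores[scores > 0]

plt.hist(scores, bins=np.arange(0, 105, 5), edgecolor="black")
plt.axvline(60, linestyle="--", label="minimum threshold (illustrative)")
plt.xlabel("normalized score (0-100)")
plt.ylabel("number of students")
plt.title("Outcome 1: every data point on one histogram")
plt.legend()
print(f"mean = {scores.mean():.1f}, std = {scores.std():.1f}, n = {scores.size}")
plt.show()
```

With every point plotted, the mean, spread, modality, and tails are all visible at once, which is the point made next.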

And then you do the magic, because, like I said, ABET doesn't accredit departments; they accredit programs.

And your department might have a course-- like our structures course, MSE 350-- that has a huge number of students who aren't in our program, because we offer a minor.

And one of the requirements for the minor

is our structures course.

So we have students from biomedical

engineering, from aerospace, chemical

engineering, from mechanical engineering

all over the place that

are not in our program.

So we have another checkbox that says, which program do you want to filter by?

And we can choose our material science

and engineering undergraduate program.

And you click that button and watch the numbers drop, because now you're only getting the students from your program.

And it allows us to prepare these histograms so that I can go to a faculty meeting-- I do this once a year-- and show the entire faculty how our students are doing on outcomes one through seven, for the last academic year, for just students in our program.

And we look at the data,

we discuss it and off we go.

And that's continuous improvement.

We've used it as input. If it's not going to help us, that's OK; at least we've considered it as input to our program at the level that's appropriate.

So that's what we do.

Yeah. Well, the process sounds great. I actually enjoy that annual faculty meeting where we look at the data; we're visually interrogating this database of student performance in a variety of different ways.

And it's quite interesting to see where

the program is doing particularly well

and where there are

improvements to be made.

So I'm a fan, but that's just my opinion.

Would you say that this method is working

from the ABET point of view?

Are they satisfied with the process that

you've described here?

Yeah. Well, we just had our ABET review last fall, and of our 12 accredited programs, 10 of the 12 got NGR-- next general review. NGR is perfect: no shortcomings.

And so, yeah, I think it

worked really, really well.

Were there problems? Yeah-- that's why the two programs that got dinged for shortcomings got dinged. And guess what? They deserved to be dinged, because they just adopted our process but forgot a really important part.

They forgot to document how they did

continuous improvement of their program.

They just said, we did this, and it's not going to help us. That's not OK: you must show continuous improvement of your program.

And they forgot to do

that. Now, they had done it.

They just didn't document it.

So they had to go back.

And of course, both of these programs had

pretty major

curricular reviews and changes.

And they just didn't

document it or write about it.

So it made them complacent because it

worked too well, I would say.

So don't get complacent.

You still have to make sure that you

document and

continuously improve your program.

And this is not going to

get rid of that part at all.

Well, it sounds like the process works as

long as you follow the

process, which sounds almost

tautological. I'll have to

logic that one out for a minute.

Well, while I think about that, let me ask this.

You've mentioned that we sort of started

this process six years

ago, seven years ago now.

What do next steps look like for our

continuous improvement of our process for

continuous improvement?

How are we going to get better at

inoculating ourselves against ABET?

So when I look at these histograms-- when anyone looks at these histograms-- we have something like 90% of our students above our threshold for minimal acceptance. You can graduate from the University of Michigan with a 2.0 GPA, and that's like a 60% for the way most of us grade exams. So that's our minimum level, because that's just being honest: a 2.0 can graduate you.

But when you look at the data, there's

always a few stragglers in

the tails that are below that.

And you have to ask

yourself, what about them?

And so this has actually been a good

process by looking at

these histograms over and over.

It's made me think about what about that

group? Even at our

university level, we are

very, very proud of our graduation rate

at University of Michigan.

Our five-year graduation rate is like

93%. That's the envy of the whole world.

Only a very few schools can say that. I

remember going on college

visits with my daughter and the

people standing up and saying, "We are

very proud of our five-year

graduation rate. It's 65%."

And I'm like, "What? How can you be proud

of that?" And I came

back, and when I talked to

some of the deans about that, they said,

"Yeah, we're unusual."

Even with our amazing 93%, and by the

way, some of the 7% are

accounted for by people who dropped

out of engineering but still ended up

graduating from other

colleges at our university,

there's still the stragglers of the 3% to

5% that do leave our university.

I just went to a second provost seminar

on teaching where the

provost stood up and

talked about this and said, "What can we

do about that 3%? How can we

make sure that they graduate?

That should be our focus." And why

shouldn't that be our focus in our

program? And maybe we can use

the ABET data we're collecting to help us

understand that

because guess what? We know

who those people are. We haven't accessed

that data yet because

it's a little complicated.

I believe we would still need IRB approval to actually do a study-- I'm not sure; I'm going to have to find out the rules. But a new tool just became available that might let us easily do this without hiring education researchers or data specialists. And that tool is MAIZY.

MAIZY is U of M's generative AI bot.

That's a private

generative AI bot system.

So it means we can look at that data

without exposing student

data to the world. We can stay

FERPA compliant. MAIZY has a way to link

to a SQL database, which is what our

whole data warehouse

is built on. And so I've been talking to

those folks and we're

going to see, can we ask the

generative AI bot questions like: for the students in the tails for outcome one, you know, can we do a longitudinal study?

Because we have several years of data, from when they're sophomores until they graduate.

them improving? Do they

graduate? What happens to

those students? Can we talk about race

and gender? Can we talk about first

generation students,

Pell Grant students? What can we learn

from the data of the

students in our tails?

That could significantly help us improve

our program. And I'll

tell you what, if we did it,

I'm not putting this in our self-study

because I'm not doing this

for ABET. I'm doing this for

ourselves because of what I said way

back. First, we want to improve our

program for ourselves and

worry about ABET later. But I think it's

a wonderful exercise, a

wonderful way to use our data.
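As a sketch of what such a longitudinal query could look like, here is a toy version in Python with pandas. The table layout and column names are invented stand-ins for the real warehouse schema, and no claim is made about what MAIZY actually generates.

```python
import pandas as pd

# Invented long-format extract: one row per (student, term, outcome)
# measurement, scores normalized 0-100.
df = pd.DataFrame({
    "unique_name": ["aaa", "aaa", "bbb", "bbb", "ccc", "ccc"],
    "term":        ["F22", "F24", "F22", "F24", "F22", "F24"],
    "outcome":     [1, 1, 1, 1, 1, 1],
    "score":       [52.0, 71.0, 58.0, 63.0, 90.0, 94.0],
})

THRESHOLD = 60  # the 2.0-GPA-equivalent floor discussed above

# Tail students: below threshold on outcome 1 in their first measured term.
first = df.sort_values("term").groupby("unique_name").first()
tail = first[(first["outcome"] == 1) & (first["score"] < THRESHOLD)].index

# Longitudinal view: each tail student's scores over time.
trajectory = df[df["unique_name"].isin(tail)].pivot(
    index="unique_name", columns="term", values="score"
)
print(trajectory)  # do the tail students improve by later terms?
```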

I should also mention that our Center for

Research on Learning and Teaching,

CRLT, which is solely focused on

improving education at Michigan, when

they saw our system,

they came and asked me, can we use your

system where we put in,

you know, our own questions?

Because they're doing curricular revision

pilots with a few

different programs or departments

because they do care about departments.

And so they're going

to be using our system

for their purposes, even though it has

nothing to do with ABET. So I think it's

really, really cool.

We have a lot of data. We've got data

from, you know, 13,000

students in this last pass.

In any given term, we're looking at 3,000

students. That's a lot

of data. And it's only

growing because I think other programs

have learned how valuable

this is. So that's what we do

for our ABET outcomes assessment. And

it's made getting

through criterion four a breeze.

I really love how this program

essentially started as, let's make sure

bridges don't fall down.

And now we're at a place with it where we

can interrogate this

really rich, robust data set

to ask questions about equity in our

programs and to say, how can

we better serve the students

who we are not currently serving well

enough? I think that's

fantastic. Well, that is all the

time that we have for today, but that was

a pretty good day one deep dive into

ABET. I think we might

have to revisit this topic in the future

now that we're asking interesting

questions about what we

can do with everything that we've learned

from these self-studies and

how we can use it exactly,

as you said, to make our own programs

better for ourselves

and for our students.

And I should mention, so, you know, I

agree, Tim. I'd like to,

you know, maybe bring on,

you know, some people from ABET. I'd like

to see if Jeff Fergus

can talk to us. Jeff Fergus

has probably been one of the most

influential people from the

materials community at ABET.

And I really think his principles and

values align closely with

mine. You know, Jeff was the

head of the EAC, the Engineering

Accreditation Commission. He's been in

charge of training for

ABET. So he's very, very well known at

ABET. And so I think it'd be

great to get his ideas because,

you know, one of the reasons we're doing

this podcast is to try to

share best practices with

our materials community. And to that end,

I hope everybody comes to

the North American Materials

Education Symposium. There's the plug.

It's a plug because on

Friday, I'm going to be doing a free workshop where I will show everybody our outcomes assessment stuff in detail. You get to play with our Tableau system, and I'm available to, you know, answer any questions,

not from an official ABET standpoint,

because I don't have any

official standing in ABET,

but just from a colleague in materials

who would love to help all of our

programs get through ABET

with a minimal amount of work and do a maximal job on their ABET review.

So come to our symposium,

get your chair to send you and we'll help

you with ABET for free.

Excellent. Well, thanks for being in the

hot seat today, Steve, and

to everyone else out there in

the world. We'll see you next time on

another episode of

Undercooled. See you later.
