FDA/NIST Sponsored Workshop

In Vitro Analyses of Cell/Scaffold Products

December 7, 2007

National Transportation Safety Board
Conference Center
429 L'Enfant Plaza SW
Washington, D.C.

These transcripts have not been edited or corrected, but appear as received from the commercial transcribing service. Accordingly, the Food and Drug Administration makes no representation as to their accuracy.



Welcome
Anne Plant, PhD (Moderator)

Session 2

Fred Heineken, PhD
Ken Giuliano, PhD
Lani Wu, PhD
Dan Martin, PhD
Andres Garcia, PhD
Michael Sacks, PhD
John Elliott, PhD
Robert Nerem, PhD

P R O C E E D I N G S

DR. PLANT: Good morning. I think it's time for us to get started. I'd like to introduce myself. I'm Anne Plant from NIST, National Institute of Standards and Technology. And, of course, we're co‑sponsoring this workshop with the FDA.

I hope that you‑all are enjoying this. This is, I think, a really productive workshop already. And today will be something a little different, where we're going to focus on some of the challenges that were identified yesterday and how to approach some of the measurement challenges that arise in tissue engineering, both in R&D and also in what might be useful for regulatory purposes.

I'm leader of a group called Cell and Tissue Measurements at NIST in the Chemical Science and Technology Laboratory. And NIST has a number of measurement science functions that impinge on tissue engineering, both in our laboratory and in the materials science laboratory, particularly in the polymers division. Some of those people from NIST are around today as well.

So hopefully, you'll get to contact and communicate with folks at NIST if you're interested in doing so. And if you'd like more information, please let me know. Give me your card, and I can send you more.

Without any further ado, we'd like to start our program today. Before we get into some of the more analytical aspects, we're going to warm up this session with a presentation from Dr. Fred Heineken of the NSF on some of the work that has gone on recently among the different agencies within the federal government who have technologies and interests in tissue engineering.

Fred is going to talk about the strategic plan that was put together by the federal agencies for tissue engineering to help advance tissue engineering, particularly through funding and research within the federal agencies.

Dr. Heineken received his BS degree in chemical engineering from Northwestern University and has a PhD in chemical engineering from the University of Minnesota.

He worked for Monsanto for five years where he did enzyme production research, and then joined the University of Colorado, did research on respiratory physiology and taught chemical engineering, and then joined another industrial laboratory, COBE Laboratories, where he worked on human dialysis research and product development.

So he's had a lot of very diverse experience, both in industry and in academic research. And then after nine years at COBE, he joined the National Science Foundation as a program director funding biotechnology and biochemical engineering in the Engineering Directorate of NSF.

He's recently received the NSF award for emeritus service. And in addition, I'd like to say that Fred has really been a key player in the development of tissue engineering and funding for tissue engineering, as well as sort of helping the whole field find itself and identify what tissue engineering is, and has been really a facilitator in workshops and in bringing the community together and helping to create the field of tissue engineering.

So it's with great pleasure that we have Fred Heineken, who's the chair of the Multi‑Agency Tissue Engineering Science Interagency Working Group, to give us a summary of the strategic plan for tissue engineering.

DR. HEINEKEN: Thank you, Anne.

Good morning. It's nice to see you‑all here so early in the morning. And thank you to the organizers for inviting me to talk about the Multi‑Agency Tissue Engineering Science working group.

As you see up here on the slide, we call it MATES, the interagency working group, which put together a strategic plan for advancing tissue science and engineering. There are copies on the table in the back. We had a few copies available yesterday. If you haven't gotten a copy yet, there are some more back in the entrance area.

So it's an interagency strategic plan. We have a number of different agencies that have participated in putting this plan together.

So tissue engineering as a term, as far as we can determine, was first coined in 1985 in an engineering research center proposal to NSF. And since then we've had various conferences on tissue engineering, the first being at Granlibakken on Lake Tahoe in 1988, where the first definition that we had for tissue engineering was generated.

So as we defined it through that workshop at Lake Tahoe, tissue engineering is the application of principles and methods of engineering and the life sciences toward fundamental understanding of structure/function relationships in normal and pathological mammalian tissues, and the development of biological substitutes to restore, maintain and improve tissue functions.

So that came out of the Granlibakken conference in 1988. There's a published book, authored by Dick Skalak and Fred Fox, that is a summary of the proceedings of that conference.

Since that conference, we've had various calls for proposals and awards in tissue engineering. And interagency contacts have sprung up through the Biotechnology Research Subcommittee, a subcommittee of the National Science and Technology Council. In the strategic plan, we discuss regenerative medicine, which is a field that overlaps with tissue engineering and tissue engineering science. And let me just read what we've got in the plan here.

So we look at regenerative medicine as self healing through endogenous recruitment or exogenous delivery of appropriate cells, biomolecules and supporting structures. And it's distinguished from other disciplines by its focus on cures rather than treatments. And there's an HHS publication that's referenced up here, and you can see the URL for the website on that.

In the plan, we expand tissue engineering to tissue science and engineering to give it a broader field of interest. And here we define the term as the use of physical, chemical, biological and engineering processes to control and direct the aggregate behavior of cells.

So it's a much broader definition than we had for tissue engineering originally. And it includes advances in complex biological applications requiring input from the physical and chemical sciences. We're looking more at systems biology‑type approaches to tissue engineering, that is, the computer simulation of cell behavior, and we look forward to advances in ways of looking at complex cell functions.

Here are the agencies that are participating in our working group. I won't read them all. You can see that we have fairly broad participation, all the way from basic science, to more applied technologies, to regulation and approval processes, to reimbursement for the technologies that are to be put into practice. We feel it's important that all these factors are part of our interagency working group, to give a heads‑up and early indications on new technologies that are coming along.

The working group itself was first established with a five‑year plan in the year 2000, although we'd had contacts prior to that time. We'd had other types of activities with various agencies that had an interest in tissue engineering, but we were first formally established with the five‑year plan in the year 2000, and that plan was approved by the Subcommittee on Biotechnology. It was revised and renewed in 2002.

And now since July of this year, we have what's called terms of reference. And that's also been approved by the subcommittee on biotechnology, one of the committees of the National Science and Technology Council. And our overall goal of the working group, as you can see, is to maximize the benefit of the federal investment in tissue science and engineering.

Some of the accomplishments we've had over the last five years: as was referenced yesterday, we had a panel report on the comparative international assessment of tissue engineering. That's the WTEC study, the World Technology Evaluation Center study, which was published in 2002.

We have a website so you can find out what we're doing ‑‑ the federal government in tissue engineering ‑‑ as indicated up on the slide there. NIH issued an RFA for tissue engineering in 2003 based on some of the information from the assessment that we had. There's a report on the emergence of tissue engineering as a research field, and all this information's on the website, if you want to look further into this.

We have an ongoing funding opportunity announcement right now on enabling technologies for tissue engineering and regenerative medicine. There are three submission dates each year for this funding opportunity announcement, or FOA.

We had a workshop in February of this year on stem cell research for regenerative medicine and tissue engineering. Here we tried to get the tissue engineering people to establish better contacts with the stem cell folks, and it worked very nicely.

We had presentations on various tissues from a stem cell point of view, from a tissue engineering point of view, and from an implementation, or translational, point of view. So it worked out very nicely, and the proceedings from that workshop are also on the website. And now we have the strategic plan that was issued in June of this year.

So why have a multi‑agency activity? Tissue engineering and tissue science engineering are not the purview of any one particular agency. Certainly, we've been very active in the area at the National Science Foundation, and all the agencies that were listed on the slide that I showed you earlier have some sort of tissue engineering activity. And the idea here is to try to coordinate those activities in some way or another.

Tissue engineering and science will require a close collaboration with the physical and life sciences. So we need to get the various scientific people and engineering folks talking together more frequently. There's also a need for bioethics, logistics, pre‑market review, standards and patient reimbursement considerations. And the earlier this is all looked at in the research of tissue engineering, the better.

Recently in the last few years, the Office of Science and Technology Policy and the Office of Management and Budget have been issuing a guidance memo for interagency research and development activities. Among the activities that are highlighted in that memo is a deeper understanding of complex biological systems as a priority for interagency activities in the federal government. So that's another item that you may want to look at more closely.

Why a strategic plan now? Well, there's just a lack of tissues and organs to replace those lost to disease, aging and trauma. Heart transplants, kidney transplants, liver transplants, there's a shortage of these organs for people, and tissue science and engineering is seen as a possible way of mitigating that problem.

Another use of tissue engineering is to replace animal testing for various products, for cosmetic and medical uses. So, use tissues and tissue engineering to replace animal testing.

Tissue engineering could be used to produce vaccines and other complex drugs. There are many scientific, engineering and regulatory disciplines that require some sort of integration. And fully functional tissue products will depend on accurate and reliable measurements at many scales.

So we've heard some very good discussions about biomarkers yesterday and the need to provide reliable measurements for tissues in determining how effective these tissues are. And that's one of the areas that we see as important to carry out in research.

Some of the overarching goals for the strategic plan, we need to understand better the cellular processes that are involved in tissues. We need to formulate better means of scaffolding and matrix environments. You heard a lot about the scaffolding yesterday for the various tissues that are of interest to tissue engineering.

We need to develop the enabling tools, mathematical modeling, as well as markers and sensing technologies to get a better handle on how to better design tissues for various purposes.

Keep in mind that when we talk about tissue engineering, we're not only talking about therapeutic uses. We're talking about replacing animal testing. We're talking about sensors based on tissues. So tissue science and engineering goes beyond the medical uses that many people associate with them. And we want to help along the process of scale‑up and commercialization of tissue engineering.

In the plan itself, we have eight priority areas, strategic priorities, that address the goals that I just presented to you. So there's a desire to understand cell biology much better; to identify the biomarkers and assays to characterize tissues, as you heard yesterday; to improve and advance imaging technologies; to refine cell/environment interactions; and to do more with computational modeling, the systems biology aspects of tissues.

Can we get computers to simulate cells, tissue behavior, in a way that helps the advancement of the field and minimizes experimental needs to advance the field? If you can do it in a computer and you have the models validated, you can save yourself a lot of time, as many of you know.

One of the other priority areas is assembling and maintaining complex tissues. So we're talking about mixtures of cells and how to design those cells into tissues. Then there's the need for tissue preservation and storage. So if you're going to have products of tissue engineering, you have to have some way of getting them out to the patients in a way that's useful and rapidly available. And we are also interested in application development and commercialization techniques.

Some of the implementation plans that we have in the plan include convening workshops like the one we have right here today and other types of workshops and conferences, and issuing agency‑specific funding opportunity announcements or interagency announcements of some kind or another. We have an interagency activity right now: an NSF/NIST/NIH joint announcement on enabling tissue engineering technologies that's currently on the streets.

We want to promote interagency personnel exchanges through participation in other laboratories, postdoc programs, sabbaticals and so forth; to foster technology transfer and translation via SBIR, Small Business Innovation Research, and other types of joint ventures; to coordinate policy and development, especially participation in industry‑wide standards, the ASTM standards‑type activities; and to exchange knowledge on living databases. Databases are a real issue here in trying to get information transferred among the various people interested in tissue engineering. And then to track R&D activity worldwide.

So some of the expected outcomes that we have in mind, as mentioned already: we look for additional conferences, calls for proposals and grants awarded; publications to further the advancement of the field; patents; entry of new companies into the field; FDA‑approved products; CMS reimbursement decisions; evaluations of the state of the field worldwide; and further interagency collaborations.

So those are some of the things that we hope will result from our interagency working group activities.

So thank you for listening to this, and if you have any comments or questions, I'll be glad to try to address those right now. Otherwise, there's further information available at the website that you see on the slide there. So thank you.

DR. PLANT: Thank you, Fred. We have time for one or two quick questions.

MR. DALY: I think this is a very important initiative to get interagency cooperation. It's particularly important ‑‑ tissue engineering ‑‑

DR. PLANT: I'm sorry. Could you please introduce yourself?

MR. DALY: I'm sorry. Mike Daly, Tigenics, Inc.

It's particularly important for tissue engineering. One component that quite often is left out is that we focus on the science, et cetera, but bringing these products eventually to delivery to patients is really important, and that's related to reimbursement and CMS.

And so is there any effort ‑‑ I didn't see it on your list of things ‑‑ to try to get them involved in terms of reimbursement, appropriate cost reimbursement for the technology, to get them more onboard with understanding evidence‑based medicine, et cetera ‑‑ those types of initiatives across these agencies, as well as from the tissue engineering basic science perspective?

DR. HEINEKEN: Yeah, that's the objective. That's the goal, is get the CMS people involved right at the beginning and have them involved in the whole research and development process so that they are aware and know what the technologies are. And I think that will ease their reimbursement decisions on these types of products that are coming, yeah.

DR. PARENTEAU: Hi, Nancy Parenteau.

Fred, I looked at the proposal last night, the pamphlet. And what was missing is the history of what products, what has been done in tissue engineering. And I also realize from my talk yesterday some people didn't realize I was talking about an approved product that directly led to a profitable company. And that kind of surprised me, but it was my fault.

However, I think there's a lot to be learned from what we did well and what we didn't do very well. And I would have liked to have seen at least an overview of some of the things that Circe, Advanced Tissue, Advantagenesis (phonetic), Curis (phonetic), all of us that were in the field ‑‑ Genzyme ‑‑ what have we, in your view, done properly, and what's needed to go to the next step to be more successful?

And I would have liked to have seen some ‑‑ or maybe have a workshop or something on translational issues that can help people in the next generation of these companies do a much better job, say, than we did the first time around.

DR. HEINEKEN: Okay. We'll put that item on the agenda for our next working group meeting.

DR. HOPKINS: Richard Hopkins from cardiac surgery, Children's Mercy Hospital. One of the problems that those of us in the translational research field have faced is that the traditional R01 NIH study groups have not been a particularly receptive place for this kind of research.

Are you addressing this with NIH? Do you have specific study groups to which tissue‑engineered products should go within NIH, and what are your recommendations on that?

DR. HEINEKEN: Well, there is the RFA. That's the FOA on enabling tissue science and engineering that's currently on the streets. It involves three agencies and six NIH institutes. So that is available. That is currently available. We ‑‑

DR. HOPKINS: But they still go to the same study groups.

DR. HEINEKEN: Well, we have tissue engineering at the National Science Foundation. And we just made five major awards in tissue engineering through what's called Emerging Frontiers for Research and Innovation, the EFRI process, five two‑million dollar awards in tissue engineering. So there are other options.

MS. SEAVER: Sally Seaver. Just a really quick question on your very nice pamphlet. Congratulations.

I think you've got in 2002 a five‑year charter to go ahead, and this is now five years later in 2007. Could you just update us on ‑‑ can you continue on what's happening just really quickly?

DR. HEINEKEN: That five‑year plan has been replaced by what's called terms of reference. That's the official document of the federal government to charter or legitimize what we're doing, and that was signed in July of this year for another two years or so.

MS. SEAVER: Thank you.

MR. RATCLIFF: Tony Ratcliff. Just to clarify the study section issue. Besides the regular study sections, there is a study section dedicated to tissue engineering, as well as regenerative medicine, as well as a study section looking at enabling technologies. So I think the agencies have been addressing that, at least in part, as well as tissue engineering going to the regular study sections.

DR. PLANT: Okay. Let's thank Fred again.

Okay. So now we're going to start the main focus of today's session, which is to discuss some of the trends for tools and strategies for quantifying biological response.

And I think what we saw yesterday ‑‑ and there were some really excellent talks and some really good discussion ‑‑ it really struck me that one of the themes that kept coming up over and over again is how complicated things can be, how difficult it is to know what to measure, how difficult it is to measure anything, and then do we have the correct biomarkers and do we know what biomarkers should be measured.

So what we're going to try to do in this session today is look at what some of those tools might be, either tools that are directly applicable to tissue engineering or tools that have been maybe not yet applied to tissue engineering but could have application, or that are just beginning to be applied to tissue engineering and how they might be useful to tissue engineering.

And to just sort of set ‑‑ I just want to briefly provide a few comments to sort of set the tone of one way of thinking about this very big problem of how do we define biomarkers and how do we discover them. And I'm going to take a page from Anand Asthagiri, who's a ‑‑ he just visited NIST last week. And so I extracted from him the ability to steal some things off of his website for this purpose.

Anand is a systems biologist in the school of engineering at Caltech, and he focuses his system biology problems on developmental biology and tissue engineering. I think we heard a little bit yesterday about how these two fields intersect.

And I thought Anand on his website has a really clear and very simple way of thinking about the nature of this problem, the focus being for tissue engineering to engineer higher order structure and function, to take cells and organize them into structures that are complex, high order, and have some complex function.

And in order to do this, because this is a very difficult endpoint to try to get to, a challenging endpoint ‑‑ in order to do this, one really has to be able to predictably manipulate cell behavior. So you have to be able to figure out how to get cells to do what you want them to do so that you can achieve this complex endpoint. And that in order to do that, it would be really helpful to understand those intracellular mechanisms that drive cell behavior.

And I think I'd like to add for this venue that understanding these molecular mechanisms that drive cell behavior will not only help in the R&D process for developing hypotheses and testing hypotheses and trying things in a systematic way, but that also will help inform the regulatory process by providing some underpinning understanding of how you might have gotten to this complex structure and what its functions might be and what its fate might be.

And, of course, this is maybe the nature of the problem: intracellular signaling pathways in the cells are very complex. This obviously is just a small subset of what is known about intracellular signaling. Every day there are new intracellular signaling molecules being identified, and we don't know how many of them we have identified and how many are yet to be discovered.

Now, of course, that's only part of the challenge. Maybe the larger part of the challenge is that all of these pathways share components and there's a great deal of crosstalk between these different pathways. So it becomes a very complex analytical problem, not only to identify and measure the specific biochemical molecules within these pathways, but to understand how these pathways intersect with one another, such that if you alter one pathway, how that might have an unintended consequence with respect to another pathway. And so this is sort of a huge challenge.

Of course, for tissue engineering, one of the things that really has to be kept in mind is that the extracellular components ‑‑ and particularly, with respect to this meeting, the cell scaffolding or the extracellular matrix ‑‑ have a huge role to play in terms of poising these intracellular pathways.

And so how cells respond to their matrix ‑‑ the scaffolding or extracellular matrix ‑‑ is going to set them up for how they're going to respond to other extracellular materials: hormones, growth factors, et cetera, in their environment. They're going to respond differently on some extracellular matrices than on others. And how they're going to respond to mechanical properties will depend on which integrins they're using to engage their extracellular matrix.

So these become very complicated issues. And so it's challenging to say, okay, what should we measure, and how should we measure, and how should we understand how to deal with this level of complexity. And so one of the things that we want to try to address today is how do you go about understanding this kind of a complex system and making it work for you.

Now, I'm sure that for small laboratories and independent investigators this seems a really formidable task, and I think that that's exactly true. I don't think that every individual is going to be able to understand everything that they need to know ‑‑ to discover everything that they need to know ‑‑ to make their products work, to know what directions they should be moving in, and to know what mechanisms are going to be involved when it comes to the evaluation process.

So this is really a team effort. And I think that that was the point that Fred was making with respect to the interagency plan: this isn't the kind of endeavor that any individual laboratory is going to do. It's going to require a pooled effort among the disciplines and among different laboratories in order to be able to ferret out these complex interactions.

And so we do look to the future for increasing interactions between the physical, chemical and biological communities and computational sciences and engineering in order to ferret out some of these things. And so that is part of what we're going to explore today.

I'd also like to bring up that part of the effort of helping to provide this infrastructure, and to further understanding of these complex events, and what do you measure and how do you measure it is a new subcommittee through ASTM International, which is on cell signaling. And there are a number of people in this room who contribute to that committee, and I would encourage anybody else in this room who might be interested to please find out more about it and be contributors to this.

But part of the genesis of this committee is the realization that directing cells to migrate and differentiate and assemble in some desired fashion and have some desirable function requires some understanding of the intracellular pathways that go into directing these events. And that is going to require quantitative measurements of cell signaling biomarkers such that we can pool these data and really understand the big picture.

Again, this is going to aid both R&D, and it's going to aid quality assurance and quality control. And, of course, as everybody has said already, it's going to require more than one kind of measurement and it's going to require more than one biomarker in order to get an idea of where the cells are in their plan and be able to predict their fate.

So at NIST, some of the ‑‑ one of the things that is being proposed through the ASTM is a guidance document that sort of describes how you might go about making quantitative measurements, for example, at the cellular level. And this comes from work that's been done at NIST. And I think Dr. John Elliott will maybe touch on some of these things.

But one of the things that we have been trying to provide is methodologies for quantitative automated microscopy, one of the goals of automation being to try to remove bias from the measurements and to take data on large numbers of cells. And we can see that if you look at any particular field, you might get one view of things; if you look at lots of cells, you'll see that there are lots of different things going on. And so you can't really draw a conclusion based on one look‑see.

You want to be able to also quantitate parameters that have to do with the morphology of cells. I mean, we've heard a number of people talk about how cell morphology is probably a really important indicator, and there are many parameters associated with cell morphology.

We don't necessarily know how that sort of way‑downstream phenotypic property relates to all of the details of intracellular signaling or what its impact might be. But certainly just by measuring morphology, we know we can get a very robust measurement that will provide information from day to day and from lab to lab that will allow normalization.

Once you can query cell morphology, then you can also think about applying that same kind of approach to quantifying intracellular markers, like gene reporters, for example, or immunohistochemical stains, and then start to build up a dataset of parameters that tell you something specific about the biochemistry and the signaling pathways going on inside of cells.
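
To make the kind of automated, per-cell measurement being described here concrete, the following is a minimal sketch in Python using scikit-image. The image file names and the Otsu-threshold choice are illustrative assumptions, not the NIST protocol itself:

```python
# Minimal sketch: automated per-cell morphology and marker quantification.
# Assumes two registered grayscale images: a nuclear stain and a marker channel
# (the .npy file names are hypothetical placeholders).
import numpy as np
from skimage import filters, measure

nuclei_img = np.load("nuclei_channel.npy")   # nuclear-stain image
marker_img = np.load("marker_channel.npy")   # reporter/stain image

# Segment nuclei with an automatic (Otsu) threshold to avoid per-image manual bias.
mask = nuclei_img > filters.threshold_otsu(nuclei_img)
labels = measure.label(mask)

# Measure morphology and marker intensity for every detected object in the field.
props = measure.regionprops_table(
    labels,
    intensity_image=marker_img,
    properties=("area", "eccentricity", "mean_intensity"),
)
print("objects measured:", len(props["area"]))
print("median area (pixels):", np.median(props["area"]))
```

Because the same thresholding and feature extraction run unattended over every field, the measurement is not biased by which cells a person chooses to look at, which is the point being made here.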

One of the things, of course, that is really critical about automated microscopy is that you can measure large numbers of cells, and this is really critical.

There was a little bit of discussion yesterday about the difference between accuracy and precision. Accuracy is getting the right answer, and precision is being able to do the same experiment and get the same result over and over and over again.

This is an important concept, I think, for biological processes because everybody knows you look in a microscope and you see lots of different phenotypes from your cells. And no cell is identical to the cell next to it. And is this experimental error? It's really important to be able to distinguish what's experimental error from what is natural biological variability.

Any group of cells is going to have variable responses. Probably any tissue construct from sample to sample is going to have some variability associated with it. It's really important to know how much of that variability is biological in nature, is real variability, represents the real answer and how much of it is experimental noise.

And, in fact, there's a great deal of variability just in biological response. And you can see that if you take enough data: there's always a distribution. Regardless of what phenotype you're measuring, there's always a distribution of responses.

So it's important to have good measurement capabilities so that you know the range of responses that you expect to see in your population. It's not all experimental noise, but you have to know the difference between experimental noise and biological variability in order to understand what the implications are. And, of course, there's information in that biological variability that tells you something about the processes, the mechanisms that are going on in the intracellular signaling events.
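
One simple way to separate those two sources of spread, sketched below under the assumption that each cell (or construct) is measured several times, is a classical variance-components calculation: the scatter among repeat measurements of the same sample estimates the experimental noise, and what remains of the total spread estimates the biological variability. The numbers here are simulated stand-ins:

```python
# Minimal sketch: partitioning observed spread into experimental (technical)
# noise and biological variability, given replicate measurements per cell.
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data: 200 cells, each measured 3 times.
true_values = rng.normal(100.0, 15.0, size=200)                    # biological spread (sd 15)
data = true_values[:, None] + rng.normal(0.0, 5.0, size=(200, 3))  # added noise (sd 5)

n_reps = data.shape[1]
noise_var = data.var(axis=1, ddof=1).mean()             # within-cell scatter ~ 5**2
cell_means = data.mean(axis=1)
bio_var = cell_means.var(ddof=1) - noise_var / n_reps   # between-cell spread ~ 15**2

print(f"experimental noise sd  ~ {noise_var ** 0.5:.1f}")
print(f"biological variability ~ {max(bio_var, 0) ** 0.5:.1f}")
```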

So we're going to have two talks today that are going to handle at the cell‑based level how do you take lots of cell data and then what do you do with all those data. How do you develop models based on those data?

And so, again, this is going to be information that is not directly applied in these talks to tissue engineering but is applied to toxicity and to other kinds of models to understand intracellular signaling pathways. And, of course, it's a small step to go from there to particular applications in tissue engineering.

There are, of course, other ‑‑ every scale of biology is important in this game. So I sort of started out by talking about the cell scale. But the tissue and the organism scale is also important, and it's important to have tools at those scales as well.

Can you apply these kinds of analyses at different scales? And, in fact, here is a very recent example of applying DNA methylation screens to try to understand a cartilage product: what cell type is most predominant in the product, what the cells are, and whether or not there's been fibroblast overgrowth in the product.

Again, this is another example of collecting lots and lots of data and then using sophisticated analysis to understand the result. In fact, it was Buddy Ratner yesterday who mentioned using principal component analysis to understand materials properties. Well, materials being complicated, yes, and cells being even more complicated, it's really important to have good statistical evaluations and analyses to understand all of these complex data.
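
As a concrete illustration of the kind of statistical reduction being mentioned, a principal component analysis of a many-feature cell dataset might look like the following sketch; the feature matrix is a random stand-in:

```python
# Minimal sketch: PCA to reduce a cells-by-features dataset to a few axes
# that capture most of the variability (data are random stand-ins).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(0).normal(size=(500, 30))   # 500 cells x 30 features

X_std = StandardScaler().fit_transform(X)   # put features on a common scale
pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)           # per-cell coordinates on 3 components

print("variance explained per component:", pca.explained_variance_ratio_)
```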

Another technique that is going to be very important in tissue engineering ‑‑ and I think it's been alluded to a little bit already in this meeting ‑‑ is proteomics. And we'll hear a talk today about proteomics, again, not necessarily applied to tissue engineering. But there are examples ‑‑ I shouldn't say plenty ‑‑ showing up now of interest in applying proteomics to tissue engineering.

So basically, proteomics asks what are all the proteins being secreted that might show up in a tissue, or in the surrounding media of the tissue, and what signature is expected for this tissue. Whether it's operating normally or functioning abnormally, these signatures might be very diagnostic in evaluating that tissue product.

Again, on the organism level, when you implant a tissue, what is the proteome that might be available to you in the blood? These are very complex analytical problems, both from the measurement point of view and from the analysis point of view. But they may provide signatures that give you a really good handle on how is that organism, how is that patient responding to the implanted material and is that normal response, is that abnormal response, and catching that early in the game.

In fact, I found a job advertisement ‑‑ I've taken some of the verbiage out of it ‑‑ through the McGowan Institute and the Windber Research Institute that acknowledges that broad clinical application of cellular therapy will require patient‑specific understanding ‑‑ we're talking here about personalized medicine ‑‑ and requires exploring the genome and the proteome of engineered tissues, and using bioinformatics, in order to really advance the frontiers of regenerative medicine and provide a good assessment.

So it's tools like this that we want to address today. Again, we're sort of on the forefront of what tools are applicable and how to apply them. But at least we can start initiating a discussion here.

And I picked this slide to end on from Tony Ratcliff, where he sort of puts everything into one box. What are all the challenges? And clearly, it's obvious that understanding mechanism is a huge challenge and a very, very important component of being able to be successful at bringing regenerative medicine and tissue‑engineered products to market.

If you don't understand the biological processes, it's very difficult to define biomarkers and to assess patient response. And, of course, that idea of defining biomarkers is probably key to this whole thing. It's a function of what do you measure, how do you measure it, and how do you interpret what you've measured so you know how it relates to an outcome that you're interested in.

That's all I want to say with respect to introducing this session. So if you'll permit me, I'd like to go right on to our first speaker of this session who is Dr. Ken Giuliano.

Dr. Giuliano serves as the principal scientist at Cellumen, Inc., where he's involved in the development and integration of new cell‑based reagents and cellular systems biology profiling assays.

Dr. Giuliano was formerly a principal scientist at Cellomics, and also an assistant professor in the department of neurology, neurological surgery and cell biology and physiology at the University of Pittsburgh Cancer Institute.

He's authored many papers, particularly on the use of advanced fluorescence‑based reagents, high content screening, and cellular systems biology. And he's also authored and co‑authored 15 patents. He received his PhD in biochemistry from Colorado State University.

So let's welcome Dr. Giuliano, and I'll try to get his talk up here.

DR. GIULIANO: Thank you, Anne. Thank you for inviting me to describe our technology, and I apologize for substituting for Lance Taylor, the CEO, who really wanted to be here but just couldn't today.

What I'm going to talk about is what Anne mentioned: cellular systems biology. We've talked some about systems biology already. But what we do is use cells as the simplest system, and what I'm showing you briefly there in the title slide is a heat map and also some of the cell dynamics.

These are actually cells where the mitochondria have been labeled, and the cells are treated with a toxin. You can see that the mitochondrial potential goes down. So it's really time/space activity that defines cellular systems biology.

So what I want to talk to you about today is to introduce the company and cellular systems biology as we've defined it, and then take a little piece out of what we call cellular systems biology and talk about cytotoxicity profiling as an example application of CSB, or cellular systems biology.

So this is very similar to what Anne just showed, in a little different order, where we start with the continuum of molecules, cells and tissues. And this is mainly where we work, here at this interface of cells and tissues. But, again, the systems go all the way up to organs and whole animals. And we know that the single cell is the simplest system that we can define as a system, because it has emergent properties.

So the challenge now is that we know the cell is an integrated, interacting network. And what I'm showing you here is a tumor cell ‑‑ a brain tumor cell where actin has been labeled ‑‑ and I'm looking at the dynamics over time of the cell moving. And, again, even a fairly simple function such as this involves gene expression, integration with membrane receptors acting all the time, membrane pumps.

How does a cell attach to a surface? It has to be attaching in the front and releasing in the back so it can move forward. You can see some of those attachments being laid down. What signaling pathways are involved? What phosphorylations or other post‑translational modifications are going on so the cell can move?

And, obviously, the cytoskeleton has to be coming apart and assembling at the right place and the right time for the cell to move; and what molecules are being synthesized and degraded, and how is it using energy?

So what we're really trying to attack is the cell as an integrated, interacting network: how do we define that network and use it ‑‑ mainly in our interactions with pharma ‑‑ in terms of increasing efficacy and decreasing toxicity of potentially lethal compounds.

So how did we get to where we are today in terms of the emergence of cellular systems biology? It started a little more than ten years ago now. If you wanted to do cell‑based assays in pharma or in industry, you measured cell population responses; there were HTS methods and whole‑plate readers.

And we were looking at cells in a well in a whole plate. And you were looking at, as Anne just mentioned and I'm sure Lani will talk about, the population average.

Then, essentially with the advent of Cellomics, came high content screening, which was automated cellular imaging. And with the first generation of HCS readers, one or two features were measured in the cells.

Then came multiplexed HCS, which was automated cellular imaging where three to four features were measured. Now we were starting to get a moderate amount of data, so that had to come with some kind of data management.

But where we are now, and want to get to, is cellular systems biology, where we optimally multiplex the cell and tissue imaging. And we need new reagents for this. What we're doing now is generally measuring greater than ten features. And now we need much more automated data analysis, and that includes classifier informatics.

So it's really this: the cell is an integrated, interacting network, and you need the reagents and tools to measure those processes, and then the informatics to really simplify the interpretation of those data, because otherwise you just end up with this mountain of data.

So the cellular systems biology approach has as its foundation genomics, proteomics and metabolomics. And what we've done ‑‑ everybody has their own definition ‑‑ is come up with functional biomarkers and use cell arrays and also tissue microarrays to look at cellular systems biology in single cells on a two‑dimensional substrate, as well as cells in tissue microarrays.

So the idea is this cellular systems biology profiling approach, where you've got a perturbant ‑‑ for example, a compound ‑‑ being added. And you can see, as it cycles through, the different things that are going on in the cell in terms of organelle changes, translocations within the cell, changes in transcription, for example.

What this gives us is a profile, which is shown on the left: what we're calling proprietary cellular biomarker panels. We get a readout of multiple, multiplexed parameters. But then we also have a proprietary profile database with classifiers. What it does is go through and match the profile that we get for a particular compound ‑‑ a particular perturbation ‑‑ against our proprietary database, and then use the classifiers to classify the response that we get using these multiple parameters. And I'll show you an example of that, and how it seems to work a lot better than just doing simple assays.
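
The actual panels, database and classifiers described here are proprietary, but the matching step can be illustrated generically: treat each compound's multiplexed readout as a feature vector and look up its nearest neighbors in a library of reference profiles. Everything in this sketch (dimensions, names, distance metric) is a hypothetical stand-in:

```python
# Minimal sketch: matching an unknown compound's response profile against
# a reference library of known-compound profiles (all values are stand-ins).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
reference_profiles = rng.normal(size=(137, 33))   # known compounds x features
reference_names = [f"compound_{i:03d}" for i in range(137)]
unknown_profile = rng.normal(size=(1, 33))        # new perturbation's readout

nn = NearestNeighbors(n_neighbors=3).fit(reference_profiles)
dist, idx = nn.kneighbors(unknown_profile)
for d, i in zip(dist[0], idx[0]):
    print(f"closest match {reference_names[i]}: distance {d:.2f}")
```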

So what are the tools that we're using for cellular systems biology? We start out with, again, like I told you, these cell models, which are mostly two‑dimensional, but also patient cells and tissues. And we've been using existing imagers ‑‑ so we're not developing new high content readers. We use the Cellomics and other platforms ‑‑ we're platform independent ‑‑ to look at time/space activity within single cells.

So these are imaging systems that read single cells, multiple colors within a single cell. And, for example, 384‑well plates are our main modality of analysis.

Reagents and profiles: I won't talk much about reagents, but we have a series of reagents with which we can manipulate the cell, for example, gene switch proteins and siRNAs. For example, we can change the concentration of a protein and then measure the effects of that protein on cells. And that includes ‑‑ and I'll just show one example here ‑‑ positional biosensors, which report their activity as a position in the cells.

So we can report activity such as kinases, proteases and also protein‑protein interactions using these biosensors and add those into the profiles that we're measuring for cellular systems biology.

So it's imagers and reagents. Also, the important part is the informatics and classifiers that we're building now for the huge amount of data this generates. You can imagine, if you measure four colors in half a million cells at ‑‑ and I'll show you ‑‑ different time points, you very quickly get a very large dataset that you need to manage and be able to interpret.

That's why we say that we can then build this systems knowledge, which is mainly pointed at pharma right now, in terms of them being able to make decisions on efficacy and toxicity as early as possible on the discovery pathway ‑‑ I think that's what I showed; start here.

So how do we implement this cellular systems biology? What we're attacking are really the three different parts of the drug discovery process: early drug discovery, drug development and clinical trials. And our products then range from ‑‑ and I'll go into a little bit more detail on each one of these ‑‑ cellular models of disease for early drug discovery; cytotoxicity profiling, which is the example I'll give you more information about, for during drug development; and then, for clinical trials, patient sample profiling ‑‑ so we can take patients' tissue samples and help to stratify them for clinical trials.

So in the drug discovery realm, the cellular models of disease are what I told you: disease‑relevant cellular models, the biosensors for measuring activities, and the profiles. We believe this improves the quality and quantity of lead compounds that drug companies want to move forward.

And really, with the profiling tools ‑‑ by using this high content approach with the biosensors ‑‑ we can now start looking at targets that were really intractable before. And because we do collect information about the system, we can start flagging off‑target effects at the earliest stage of the drug discovery process.

For drug development, we have cytotoxicity profiling. The goal here is to identify potential toxicity before entering expensive preclinical testing. So we can at least prioritize lead compounds to give the drug companies an idea of which drugs are much more likely to show toxicity, and then they're able to rank those before they move into those more expensive trials.

And finally, for clinical trials there's patient sample profiling, where what we want to do is improve trial enrollment ‑‑ and therefore new drug efficacy ‑‑ by stratifying patients using profiles of their own tissues.

This also couples very closely to what I showed you before: making these proprietary panels of cellular biomarkers that can also be used as diagnostic tools. Our collaborative partner there is the Mayo Clinic. So let me just talk about this one example, the CellCiphr cytotoxicity profile.

So, as I told you a little before, the advantage is that what we do is a systems biology approach: we monitor multiple functions at multiple time points and multiple doses of compounds. So we can see the combination of all of those. And, of course, that leverages the sensitivity and throughput of high content screening.

What we've done is validate this to high throughput screening standards ‑‑ that was a requirement we had to meet to enable pharma to accept this. And what we use is 384‑well capacity plates, extensive quality control, especially on the data analysis side, and classifier software to simplify the interpretation and predictivity of this cytotoxicity profiling approach, because we are profiling the system itself ‑‑ not just looking at cell death but gaining some insight into the mechanism of action of the potential toxicants.

So the first panel we developed was in collaboration with Millipore. And what we attacked were several of what we defined as biomarkers: for example, stress pathway activation ‑‑ that's a biomarker, and the feature, as I'll show you on the next slide, would be, for example, a stress kinase phosphorylation ‑‑ plus organelle function, oxidative stress, DNA damage, the cell cycle, and cytoskeletal integrity.

What I show you here are four different colors from the same field. So one of the plates has these four colors, which gives us one set of biomarkers. And the second set of plates has another four colors in the same well, which gives another set of biomarkers, and there's more detail there.

For example, Panel 1 was built with HepG2 cells, a human hepatocellular carcinoma cell line that's widely used for toxicity measurements because it's easy to grow up a lot of these cells. And I'll show you some primary cell results that we also have.

For example ‑‑ we saw some images of Hoechst or DAPI labeling the nucleus ‑‑ we can measure several features: how many cells are lost (a simple assay like that), whether there is cell cycle arrest going on or DNA degradation, and what the changes in nuclear morphology are. And there's another host of reagents for measuring oxidative stress, which would be measured with the phosphorylation of a particular histone.

For the DNA damage response, we can look at tumor suppressor activation. We have a couple of mitochondrial function assays, and a mitosis marker, another measure of where in the cell cycle a compound may act. And for microtubules, we can measure microtubule stability in cells ‑‑ the destabilization of the microtubules ‑‑ by looking at the distribution of the component tubulin protein.

So that's Panel 1. Before I get into results: what we found is that we get better results if we combine Panel 1 and Panel 2 data. So let me just tell you about Panel 2. Panel 2 is designed with primary rat hepatocytes, which are much more metabolically functional and, of course, they're primary, so they give you a different ‑‑ a complementary ‑‑ answer than the human hepatoma cells, the HepG2s.

Again, for multiple mechanisms of toxicity, we measure 11 parameters. And we do this at three time points: acute, early, and a 48‑hour chronic exposure. I should mention that with the HepG2s, acute is about one to two hours; early is 24 hours; and we go out to 72 hours for our chronic exposure. We found that that's essential, because compounds have different effects at these different time points.

So these are all fixed endpoint assays. Again, we have the four‑color, two‑plate assay, where now we've changed out a couple of parameters. And we look at apoptosis, peroxisome proliferation within the hepatocytes, and phospholipidosis. And, again, mitochondrial function is very important for primary hepatocytes, as we learned yesterday. Stress pathway activation and cytoskeletal integrity are in this panel again.

So how do we look at the data from these two panels, where we're adding a compound at three different time points ‑‑ again, the compound is added in a ten‑concentration dose curve ‑‑ and we're collecting four colors, or at least 11 biomarkers, out of each well?

So, for example, this acute profile ‑‑ now we can start color coding things to make it a little easier to visualize. This could actually be the response of a single cell. But one of the things we do is look for the AC50 or the EC50 response. And I'll show you some of the details there.

But this would be, for example, the color‑coded response of the early, acute and chronic profiles for a single drug at a single concentration. You can see very quickly that you could start building up a large array of these. And we show it here as a heat map of known and unknown compounds that have gone through both profiles at different times.

What we'll show you here is that you can start clustering these compounds based on the CellCiphr profiles of knowns and unknowns. And then, as was shown yesterday in terms of classifying the scaffolds themselves, we can start classifying the compounds, or the toxicants, in terms of these different biomarkers. For example, this compound C24 would fall into more of a stress kinase induction classification. So the idea is: collect this mountain of data on time/space activity within the cells, cluster the compounds by function using knowns and unknowns, and then classify those.
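
The clustering step just described could be done, for example, with ordinary hierarchical clustering on the profile vectors. The sketch below uses Ward linkage on simulated profiles; it is a generic illustration of the idea, not the CellCiphr classifier itself:

```python
# Minimal sketch: hierarchical clustering of compounds by response profile.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

profiles = np.random.default_rng(1).normal(size=(50, 33))  # compounds x features

Z = linkage(profiles, method="ward")              # build the dendrogram
groups = fcluster(Z, t=5, criterion="maxclust")   # cut it into 5 clusters
print("cluster assignment per compound:", groups)
```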

So we get an automated readout of the experiment. And this just shows a little more detail, where each of the compounds is tested on a ten‑point dose curve. And these are just a couple of example images shown at each of those concentrations. Again, we found this very important, because we've also found that compounds have different mechanisms of action depending on the concentration and the time that they're exposed to the cells.

So we measure 11 features, each measured in each cell, at three time points, on a ten‑point dose curve. And then we curve‑fit the data; in this case, we tabulate the AC50 values, which is shown in B here. We can then build heat maps out of these data. And then ‑‑ I'll show you a little more detail on this ‑‑ there's what's called a mountain map, where we can actually start comparing the response of particular compounds to the total set. So let me just describe a profiling case study within the cytotoxicity profiling that we've done.
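
The curve-fitting step just mentioned ‑‑ extracting an AC50 from a ten-point dose series ‑‑ is commonly done with a four-parameter Hill (sigmoidal) fit; here is a minimal sketch with simulated concentrations and responses:

```python
# Minimal sketch: four-parameter Hill fit of a ten-point dose response
# to extract an AC50 (concentrations and responses are simulated).
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ac50, slope):
    return bottom + (top - bottom) / (1.0 + (ac50 / c) ** slope)

conc = np.logspace(-9, -4, 10)                                 # ten-point dose series (M)
resp = hill(conc, 5.0, 95.0, 3e-7, 1.2)                        # stand-in measured feature
resp += np.random.default_rng(2).normal(0.0, 2.0, conc.size)   # assay noise

p0 = [resp.min(), resp.max(), 1e-6, 1.0]                       # rough starting guesses
params, _ = curve_fit(hill, conc, resp, p0=p0, maxfev=10000)
print(f"fitted AC50: {params[2]:.2e} M")
```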

So we've taken 137 compounds ‑‑ that's 101 unknown compounds and 36 control compounds ‑‑ and we have drug safety data for the 137 compounds. This was done in collaboration with our CHA partner. And what we did was use this human drug safety data and score each of those compounds on a scale of 0, 1, 2, 3, 4 for in vivo toxicity. So now we have a known set of human toxicities for each of these compounds. And then we tested them in the CellCiphr HepG2 Panel 1 and the CellCiphr rat hepatocyte Panel 2.

One of the results was that CellCiphr Panel 1 showed similarity to known controls, the group toxicity rank order, and a safety index ‑‑ which are really our goals in terms of what we want to get out of the profile.

This just shows you one way of looking at the data. Across the top are the compounds, and down the Y axis are the parameters, the features that we measure ‑‑ there should be about 30 of those. And they're color coded by the AC50s: the cooler colors, the blues, are millimolar; the yellows and reds then move from micromolar to nanomolar. So the more potent, or more toxic, the compound, the redder it will show up on this heat map.

The way they're ranked from left to right is what I told you: we have the in vivo toxicity data, from those that are minimally toxic to significant toxicity, and also some that didn't have any of the safety data at all.

This is one way to look at the data in terms of just looking at one compound. There are a couple of compounds that are boxed here, and you can see the responses of each. But, again, that's just one way to visualize the data. It's not a really easy way to look at it.
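
For reference, a heat map like the one being described ‑‑ compounds across the top, features down the side, colored by potency ‑‑ is straightforward to render. The sketch below plots -log10(AC50) so millimolar values read cool and nanomolar values read hot; the data are random stand-ins:

```python
# Minimal sketch: compound-by-feature heat map of potencies, colored so
# more potent (lower AC50) responses appear hotter (data are stand-ins).
import matplotlib.pyplot as plt
import numpy as np

ac50 = 10 ** np.random.default_rng(3).uniform(-9, -3, size=(30, 137))

plt.imshow(-np.log10(ac50), aspect="auto", cmap="coolwarm")
plt.xlabel("compound (ranked by in vivo toxicity)")
plt.ylabel("measured feature")
plt.colorbar(label="-log10(AC50)")   # 3 = millimolar (cool), 9 = nanomolar (hot)
plt.show()
```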

Another way ‑‑ this is what we call a mountain plot, for example. On the Y axis now, again, the AC50 goes from millimolar up to nanomolar. So the more potent the toxin, the higher this mountain will be.

Listed along the bottom are the different features. And what I'm showing you here is that a known compound, etoposide, is shown in red, and the response profile for an unknown compound, coded to us as H25, is shown in blue. We're blinded to it, but we can see that, in terms of its activities, it's not quite as efficacious ‑‑ but it has a very similar profile to etoposide.

So that's one way of visualizing the data to look at comparisons between them. And shown in the background there is the gray ‑‑ which you can't really see on the screen here ‑‑ which is just the maximal response for the entire set. So that's how these two compounds compare to the response of the entire set of 137 compounds.

So how do we classify the response from these 137 compounds? That produced over 4,500 dose response curves. So, as we wrote here, it is very difficult to apply a manual scoring methodology to handle this analysis.

What we then did was take the assay data from the compounds and use that, with the in vivo scorings, to construct a classifier ‑‑ a first‑generation classifier ‑‑ to rank the compound toxicities.

So now we can take all of these data ‑‑ and I showed you the analysis, that of the clustering. And we used the principal component analysis to build this first‑generation classifier. And what I can say right now is that had an improved performance over simple cytotoxicity assays. For example, MTT which just shows you a cell loss or cell death assay.
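
As one way to picture that step, here is a hedged sketch, assuming scikit‑learn: reduce the AC50‑derived features with principal component analysis and train a simple classifier against the 0-4 in vivo toxicity scores. The random matrices stand in for the real panel data, and logistic regression is just an illustrative choice of final classifier -- the talk only specifies that PCA was used.

```python
# Sketch of a PCA-based first-generation toxicity classifier (illustrative only).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(137, 30))    # 137 compounds x ~30 features (e.g., -log AC50s)
y = rng.integers(0, 5, size=137)  # known in vivo toxicity scores, 0 through 4

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=10),          # compress correlated features
                    LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, X, y, cv=5).mean()      # random data -> chance level
print(f"cross-validated accuracy: {acc:.2f}")
```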

The idea is: is it really worth collecting all these extra data? Let me show you here -- this is a table of the results of all those analyses.

So if you look at the in vivo toxicity as we rank the compounds, the total number of compounds across these ranks was 137. If you look at a very simple assay, which is 24‑hour cell loss, how well did the classifier pick out the significant, moderate and minimal toxicities? It did fairly well. It got 84 percent.

But if we look now at the CellCiphr classifier, which uses all these extra data on the toxicity, we did very well. We caught 100 percent of the significantly toxic compounds, which is really one of the things that pharma wants to know. How do we rank these?

You would then rank those compounds lower -- it's pretty well guaranteed that they will give you in vivo toxicity, so you probably want to rank them lower as you're deciding what will go into a clinical trial, or preclinical, or even into expensive animal studies.

In each case, you can see here that the classifier did a little bit better than the very simple assay for cell loss -- just what kills cells. And overall, this first‑generation classifier had 82 percent accuracy in matching what we know from the in vivo toxicity of these compounds.

So with that said, what we are working on now is developing ways to get these data to a customer. We can give summaries of the data, or we can give all the data -- some customers actually want to see all of it.

What we're really working on is the ranking -- being able to rank compounds. So if all they want to know is how a compound ranks, then what we're calling a safety index gives them that in just a couple of parameters. They still have the choice of looking at everything, but this is how to actually rank the data and what kind of probability a compound has of going forward in the drug development process.

So those are the Panel 1 and Panel 2 that I've shown you. I'm almost done. This is Panel 3 now, which is a rat hepatobiliary model -- kind of taking the cue from what we've been hearing about quite a bit lately, of going from two dimensions to something that's almost three dimensional, by doing a sandwich system instead of the single‑cell overlays that our Panel 1 and Panel 2 are.

Now if we overlay these with collagen or Matrigel, what we have is really a hepatobiliary model where you get highly differentiated rat hepatocytes. You get chronic exposure -- now we can do three days plus. We can do a 384‑well plate. And what we can now measure is cholestasis, steatosis and mitochondrial potential.

What you're looking at here in red is a marker of mitochondrial potential in these cells, which isn't really changing that much. And in green, what you're seeing is a dye that actually gets pumped out from the intracellular compartment into this hepatobiliary space. What we can measure then is the effect of compounds on the pump that's actually doing that export.

So that would be one of the assays that pharma wants to know about, in terms of how quickly compounds are being metabolized and then pumped out. We can add compounds, measure the change in the pumping of the dye into the hepatobiliary space, and so measure the activity of the pump. So that's just one of the parameters we can measure in this new Panel 3, the hepatobiliary model.

So just a couple more things that we have in development now: human and rat primary cell lines, in terms of developing new cytotoxicity panels, as well as new parameters, if we're going to measure new biomarker features.

We're working on tissue selectivity panels for neuronal cells and cardiomyocytes. We want to branch out from just the liver toxicity. We focused on that first because that's what pharma said was the most important.

We've done a little bit of work now, and we're working on stem cell derived cultures for particular organs or tissues, and on culturing tissue‑engineered array models. And as I showed you, we're moving towards more kinetic live cell panels, like the hepatobiliary model I just showed you, and the kinetic measurement of mitochondrial potential and also cell motility in general.

So let me summarize, just very briefly. The CSB, cellular systems biology, approach is being implemented -- or we're implementing it -- to improve the efficacy and decrease the toxicity of leads, clinical candidates and drugs.

And I showed you one example here, the cytotoxicity profiling, and how its results demonstrate this idea of cellular systems biology: we can measure and manipulate the cellular system, collect the data, and then it's really the development of the informatics that allows those data to be interpreted.

And with that, I'll stop. Thank you.

DR. PLANT: Thank you, Ken. That was very nice.

Bob.

DR. NEREM: Ken, over here, your last slide was sort of ‑‑ triggered. Can you say anything more about what your concept is for a tissue‑engineered array model?

DR. GIULIANO: I can't say too much, not that it's a secret, just that we haven't worked on it that much. But I mean that's one of the reasons I think that we are here, to learn more about that.

But right now, what we're working on in terms of tissues are the patient sampling profiling. And that's our initial stage of that. So we haven't really done much on the engineering of those at all.

DR. NEREM: Do you see the possibility in the future of actually being able to do the kind of analysis you're doing on a 3D tissue‑engineered construct?

DR. GIULIANO: So, that's a good question. And a lot of that depends on moving the detection systems along -- which we can already do to some extent on the present detection systems. For example, the ArrayScan has an ApoTome, with which we can do some optical sectioning.

So I think that, yes, that would definitely be possible, because we already know that we're going to get a different response going from two dimensions to three dimensions. And, yes, I think that's definitely in the plans. We really need to address that sooner rather than later.

DR. NEREM: Thank you.

DR. PLANT: Keep in mind, we will have a roundtable discussion after this session before the wrap up. And so some of these might be the subjects of further discussion.

MR. DALEY: Mike Daley, Tigenics. It's a very fascinating approach.

And my question is, have you ever really validated it? And the validation comes from the pharmaceutical industry -- we only hear about the successes, but there are a hundred times more, a thousand times, maybe a million times more failures. And many of those failures fail in the development process on various aspects of cell toxicity or whatever it is.

And so, therefore, what you really should do, in a blinded fashion, is try to pair up and find out whether something that historically we knew failed because of X, Y and Z could have been detected in your early high throughput screening process -- whether you would have predicted that it would fail and saved somebody gazillions of dollars. But the real value to you is that you've then validated the process for prospectively predicting an outcome.

Have you done that, or are you planning to do that?

DR. GIULIANO: So, yes -- there are two parts to that. What I showed you, the case study -- I guess I needed more detail on that -- was with CHA, Cambridge Healthtech Advisors; we worked with ten pharma companies. They each sent us 10 to 20 compounds for which they knew the human tox data. We were blinded to those, and that was the table I showed, where we could actually predict those pretty well. So that's one thing.

So we have done that, and we're actually following up with some of those companies to do more assays for them, because we did get a fairly good result there that showed our approach -- at least this first‑generation classifier -- was better than just looking at simple toxicity assays.

But we're also doing exactly what you said: taking some of these fallen angels, or other compounds that we know have failed, and starting to run those through as a demonstration of our assays and these profiles -- and using those libraries to actually develop the next set of profiles.

So the answer is twofold: yes, we have begun to validate it, and we have good results there; and we're working on further validation, so we can go forward and do prospective predictions using new generations of this classifier with new profiles.

MR. HICKMAN: Hi, Jay Hickman, University of Central Florida. I'm a little confused. So, in fact, if I understand you, you've basically identified a number of markers for cytotoxicity, and then you're basically taking a fingerprint with those markers and comparing it to a database.

Now, at other times you sort of alluded to relating that back to pathway information inside the cell. How much of that can you do, or is it really more of a cytotoxicity fingerprint based on what you think are known markers for cytotoxicity?

DR. GIULIANO: So that's a good question. What I didn't talk much about is our cellular models of disease, where we have specific biosensors and another array of biomarkers and features. And what I'd like to see is us marry the two together.

What I showed you were really what we thought were some cytotoxicity or cell stress features for which we could measure biomarkers. Those are what's in the panels right now. Panel 1 we developed with Millipore, and that's a kit that pharma can actually use right now.

But what we want to do is take some of the biomarkers and biosensors and manipulations that we use for cellular models of disease and marry those with the cytotoxicity work, to make an overarching cellular model of disease and toxicity. So we are expanding the biomarkers that we have for cytotoxicity beyond what we think of as just toxic, toward more understanding: what is the compound's effect on the cell as a system?

MR. HICKMAN: So you're trying, in effect, to marry up eventually the Cellomics platform with the cytotoxicity platform?

DR. GIULIANO: Well -- or whatever measurement platform you wanted. For example, it could be the INCELL or it could be (inaudible). But it's marrying up our reagents and assays with our informatics -- really the marriage of those two -- to make the toxicity more predictive, as well as the cellular models of disease.

MS. DONG: Hi, Jiyoung Dong from the FDA Center for Devices. I was wondering if you could comment on the pros and cons between an assay that could give you a lot of information -- very complicated and maybe very sensitive, but requiring a specialized user or a lot of training -- versus some other system that gives you medium sensitivity and a little less information, but is a lot more accessible, or can be used by people with little to no training.

DR. GIULIANO: I guess I say "so" in front of everything. So, that's exactly what we're grappling with. Pharma for a long time has been using simple assays, and the assays are getting more and more complicated. And the burden is on us to show them -- that's what they ask us: what are these new data going to tell us? We have these fairly simple assays. Can you do it better?

So the idea is we keep it as simple as we can. But there's always going to be a place for these very simple assays that give you a yes‑no answer -- as it shows here on the table, they are fairly predictive. But if you want to go to the next step -- because the drug development process is so expensive -- what we've done is go to that next step. And because the assays are more complicated, we can either transfer them to the customer or offer them as a service. A lot of them have taken the service, because it is a little too much for them to develop in the lab.

So I would say that there's room for all of these assays. It just depends on the amount of information that you want to get out of it and how you want to use it in the end.

DR. PLANT: If I could interject, too, I think that's a really deep question and something that we ought to bring up again this afternoon and discuss in some detail.

DR. NYBERG: Scott Nyberg from Mayo.

Have you tested a common drug like acetaminophen, which is safe at low doses but toxic at high doses? I'm curious to know what your system would show.

DR. GIULIANO: Yes. So we have, and we need to go, like you say, up to the millimolar amounts to start showing toxicity with that one, as well as with some other compounds where it's really hard to show toxicity. And that's why we need to go out to that 72‑hour time point.

But using that as a lesson -- that's helping us design some of these other cellular models and toxicity assays, where we can start seeing more pathway‑specific effects, so we can show that drugs that are safe still have some effect on the cell that we can actually measure as a baseline.

DR. NYBERG: So you do dose responses as part of the testing?

DR. GIULIANO: Yes, as part of the testing. We do a ten‑point dose response curve in duplicate, and those are what give rise to the AC50s. So we do go across three or four orders of magnitude of concentration for each of the compounds.

DR. PLANT: Okay. Let's thank Dr. Giuliano again.

All right. So our next speaker is Dr. Lani Wu from UT Southwestern Medical Center. Lani comes to biology by way of pure mathematics, computer science and electrical engineering. So this might be something entirely different.

Dr. Wu graduated from the University of California at San Diego with a doctorate in mathematics and left the mathematics faculty at Princeton University to join the fledgling research division at Microsoft, where she led a number of research efforts, including projects in video compression, semantic search and speech‑noise separation for multi‑microphone input.

Prior to arriving at UT Southwestern, Dr. Wu was at Rosetta Inpharmatics and was a fellow at the Bauer Center for Genomics Research at Harvard University.

Today she has two main research areas in her lab. First, she's interested in understanding network design principles that lead to cell polarization, and she does this by combining mathematical modeling and microscopy‑based experiments. And second, she's interested in understanding how single cells respond to perturbations; this involves a combination of cell population experiments, high throughput immunofluorescence microscopy and high performance computing.

So let's welcome Dr. Lani Wu.

DR. WU: Thank you to the organizers for inviting me here. Before the invitation, I had never thought I would be talking at a tissue engineering workshop.

Coming from mathematics and engineering, we were completely captivated by the complexity of the phenotypes that microscopy images capture. For us, when we look at an image, it captures the cell morphology. It captures the intensity and the spatial organization of a readout. It also captures the individual cellular response.

We're interested in understanding whether we can identify cellular states by the observed phenotype. If you look at these images -- these are drug‑treated HeLa cells stained with DAPI, PR, and P30A.

What I can see visually -- just my own interpretation, but you can tell me what yours is -- is that drugs of a single mechanism induce a consistent cellular response, and drugs of distinct mechanisms induce distinct cellular phenotypes.

So one of the questions we like to ask is: can I look at the cellular response and try to predict the mechanism of the perturbation? In our lab, our research is not really looking at toxicity or at large datasets -- it's not a pharma type of work. For our lab, the conceptual model starts from a biological network, monitoring the network components and looking at their translocation and spatial distribution.

What we would like to do is represent the phenotypes we see within an individual single cell mathematically, as a point in a high‑dimensional feature space. I will talk about that a little more later, but conceptually, that's how we're thinking about it.

If you perturb a cell, ideally what we'd like to think is that the perturbation will take you to a different point in that high‑dimensional feature space. If you have similar perturbations, you should get points that are close together. And if you have distinct perturbations, you would get points in the representation that are far away from each other.
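
A toy version of that picture, in Python with synthetic numbers, might look like the following; the point is only that similar perturbations give nearby points and distinct perturbations give distant ones.

```python
# Conceptual sketch: cells as points in a high-dimensional feature space,
# perturbations compared by the distance between population summaries.
import numpy as np

rng = np.random.default_rng(1)
n_features = 300  # e.g., morphology + texture + intensity features per cell

control = rng.normal(0.0, 1.0, size=(5000, n_features))   # untreated cells
drug_a  = rng.normal(0.3, 1.0, size=(5000, n_features))   # perturbation A
drug_b  = rng.normal(0.3, 1.0, size=(5000, n_features))   # similar mechanism to A
drug_c  = rng.normal(-0.5, 1.0, size=(5000, n_features))  # distinct mechanism

def shift(treated):
    """Population-level shift of the mean relative to control."""
    return treated.mean(axis=0) - control.mean(axis=0)

a, b, c = shift(drug_a), shift(drug_b), shift(drug_c)
print(np.linalg.norm(a - b))  # small: similar perturbations land close together
print(np.linalg.norm(a - c))  # large: distinct perturbations land far apart
```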

And in terms of perturbations, you can imagine we can use drugs, or things like hormones or growth factors, or anything that you have been talking about in this workshop, such as substrates, collagen -- all the different perturbations you would like to put on a cell.

Today what I would like to do is take you through the journey we have gone through in the past few years in trying to understand how we can get information out of large quantities of microscopy data. And since this was the first time we were going through it, what we decided to do is use a perturbation that we can easily control -- that is, drugs.

I'm going to tell you about two different approaches that we have used. They're slightly different, and they are both published. For both of the studies, we used the same large dataset of microscopy data. We started from 100 compounds, taken from 15 diverse functional categories, also including some unknown drugs.

In fact, at the time of building this dataset, our collaborator on purpose blinded ten drugs to us. And when we think about it now, there was really no reason for him to blind the drugs, because we did not know what the drugs were about anyway. And when we treat with a drug, we use 16 titrations with threefold to fourfold dilutions.

Then, in terms of markers, we just decided to take 11 markers -- antibodies that were easily found in the freezer of our collaborator. And, of course, it is complicated to multiplex, so we decided to put three markers -- three antibodies -- on one set of cells, creating five marker sets.

Then we put HeLa cells on 384‑well plates. The reason we do that is because, if you think about it, we're doing 1,600 compound treatments -- that means 1,600 experiments -- and a small lab like ours cannot afford to do it any other way. So we cultured the cells on 384‑well plates, treated them with drugs, and fixed the cells after 20 hours of drug treatment.

Then we stained them with the different antibodies -- the five different marker sets I have outlined here. In each well, we acquired nine images, because we just want to get enough cell count, like Anne was talking about. You need to have enough cells to be able to do your statistics.

So from there, we acquired on the order of about 100,000 images. In each image, we segment out the individual cell regions, until we got about tens of millions of cells.

And a question that has been raised in this conference is: what phenotypes should we extract, what should we monitor? In fact, this is not an easy question. From our experience in microscopy, we know that if we want to implement a particular feature that biologists say is important, a lot of times that particular feature is very, very hard to implement and requires a lot of time.

And not only that -- the problem is that if you only go in with the features you know, you will not find anything surprising. So that's not really where we wanted to go the first time through.

So what we decided to do is take an unbiased approach. We were good at MATLAB, so we decided to capture anything MATLAB could give us. And today we also adapt some other tools that Bob Murphy at Carnegie Mellon has developed.

So, in short, we have been able to capture four different types of features. One type is morphology -- it just tells you how big the cell is and how round the shape is. Another is texture, which in a sense tells you the pixel pattern. Another is the statistics of the pattern. And, of course, the very important feature everybody looks at is intensity -- for example, we can look at total intensity in the cell or total intensity in the nucleus.
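
For readers who want to see what those four feature types look like in code, here is a minimal sketch assuming scikit‑image; the random image and threshold segmentation are placeholders for real stained images and real segmentation, and property names vary slightly across scikit‑image versions.

```python
# Sketch: per-cell morphology, texture, statistics, and intensity features.
import numpy as np
from skimage.measure import label, regionprops
from skimage.feature import graycomatrix, graycoprops

image = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in image
labels = label(image > 128)                                 # stand-in segmentation

for region in regionprops(labels, intensity_image=image):
    if region.area < 16:
        continue                       # skip fragments too small for texture
    cell = region.intensity_image.astype(np.uint8)  # pixels in the cell's bbox
    glcm = graycomatrix(cell, distances=[1], angles=[0], levels=256)
    features = {
        "area": region.area,                      # morphology: how big
        "eccentricity": region.eccentricity,      # morphology: how round
        "texture_contrast": graycoprops(glcm, "contrast")[0, 0],  # pixel pattern
        "total_intensity": cell.sum(),            # intensity
        "mean_intensity": region.mean_intensity,  # statistic of the pattern
    }
```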

With this, we were able to extract features. However, that by itself does not help us interpret the data. So the question is: now what?

With this in mind, I'm going to take you through our two different approaches. The first approach we call the univariate approach, because we're going to look at individual features independently.

So, for example, with marker one, you can extract one feature. With this feature, we're going to look across the whole population -- across every cell in the population -- and then we can build a population statistic for the control population.

At the same time, we can also build another population statistic for the drug‑treated population. Then we gather more features from the same marker -- remember, we're doing them individually -- and extract the population statistics. Then we do that for the other two markers, extract more features, and compute the population statistics again.

With this, what we decided to do is that if the drug‑treated population is shifted to the right, we're going to mark it red -- and the bigger the shift, the redder we make it. Similarly, if it shifts to the left, we're going to mark it green, and the bigger the shift, the greener we make it. And with this, I'm going to put it together with some real data here.

So you can look at this. This is camptothecin dosage data, and what I'm pointing out here is the DNA intensity. On the bottom, you have low dosage, and you can see visually that there are maybe G1 and G2 cells -- two populations. Then when the dose goes higher, the G1 cells disappear and you only see the G2 cells. And that is consistent with what we know about camptothecin's effect on G2 arrest. The population statistic I'm talking about is the cumulative density function. What that means is that the function goes from zero to 1, and at any point on the X axis, the Y axis tells you what percentage of cells have this feature with a value less than that point. With this, we see that we have a shift starting at about the third concentration and going up. And that is a feature that you can easily get from FACS.
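
The shift scoring she describes can be sketched directly from the definition of the cumulative distribution function; a signed maximum difference between the control and treated curves (essentially a Kolmogorov-Smirnov-style statistic) gives the red/green coloring. The data below are synthetic.

```python
# Sketch: empirical CDFs for one feature, and a signed shift score.
import numpy as np

rng = np.random.default_rng(2)
control = rng.normal(100.0, 15.0, size=20000)  # e.g., per-cell DNA intensity
treated = rng.normal(130.0, 15.0, size=20000)  # higher dose shifts population right

def ecdf(sample, x):
    """Fraction of cells whose feature value is <= x (the 0-to-1 curve)."""
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

grid = np.linspace(min(control.min(), treated.min()),
                   max(control.max(), treated.max()), 512)
diff = ecdf(control, grid) - ecdf(treated, grid)
signed_shift = diff[np.argmax(np.abs(diff))]
# positive -> treated shifted right (color it red); negative -> left (green)
print(f"signed shift = {signed_shift:+.3f}")
```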

Now, I'm going to look at a feature that's a little bit harder to get from FACS. That is anillin, a cytoskeleton -- cytokinesis -- marker. We're going to look at the average intensity, and in order to get an average intensity, you need to know the cell size.

With this, we see a shift at the beginning -- even where I mark it green here, it does not have much shift. But the real effect comes in much more strongly, starting from the fourth concentration.

Then we look at another feature -- for example, the p53 cytoplasm‑to‑nucleus intensity ratio. This ratio you cannot get from FACS; it is a feature very specific to microscopy images. And the special thing you see from this picture is that the effect really comes out much later, at the high dosages.

Putting all this together -- thinking about the 11 markers I have talked to you about -- we're going to put it all together. In this study, we did about ten features for each marker, because we just wanted to test whether this was going to work. So we just did whatever MATLAB gives us, easily, without any special effort.

So, again, the first rows on the top recap what we call the profile of the DNA intensity -- it starts out pretty much black, goes to red, and then dies off a little bit. Then there's the anillin feature and the p53 feature.

So what is this telling us? When we saw this, we thought we had something here. The reason is that we see the drug effect does not really come out at a very low dosage. And if you look at it carefully, the first effect comes out right at the third and fourth dosage, and then at a later dosage you can kind of detect by eye where a second effect comes out.

So we thought this was nice just for camptothecin, and we decided to look at it for all our 100 compounds. For this presentation, I'm going to put up the first 50 compounds.

For each drug, there are three fingerprints here. The first fingerprint is replicate one -- we did two replicates -- the second is replicate two, and the third is the average profile. You also see some white here: we mark it white if the cell count is very low or if the two replicates do not satisfy our statistical‑significance similarity metric.

Here, I was very pleased to see this picture, because there's no way I could have gone through all the hundreds of thousands of images to get this sense of what the drug is telling me -- what the drug response is. And not only that: what you see is that the drug response is complex. You see lots of very red profiles, you also see some very green profiles, and some mixed. So this is a really nice way -- for us, at least -- to get an intuitive visualization of what the drugs are doing.

Now, coming back to my previous question: do drugs of similar mechanism give you similar responses? For us, in this mathematical representation, we wanted them to be close together in the representation space -- which means we want them to look similar.

So what I have done here is sort the camptothecin drug response one more time on the feature side, from very green to very red -- that's the first figure. Then I keep the same feature order and go across each row on the screen.

What I can see now, visually, is that drugs of similar mechanism give similar profiles, and drugs of distinct mechanisms give very distinct profiles -- at least I can see it by eye.

For example, if you look at protein synthesis, all these drugs are supposed to be the same, in the same functional category. But if you look carefully, they're slightly different. And, in fact, when you go in and look at the mechanisms, these drugs are really hitting the cell at different spots, even though they are all categorized as protein synthesis.

So, like most computational persons, once you have something like this, you go and try the most obvious thing in MATLAB again: hierarchical clustering. From the previous image you can see that not every drug we tested shows a response, so what we've done here is take the 60 drugs with the highest, statistically detectable responses.

On the left‑hand side is our drug list, and on the top is the functional categorization. When a drug is in a given functional category, we make the entry black. The blue ones are the ones that were blinded from us. Then we did hierarchical clustering, trying to put similar profiles together. And you can see that they essentially group almost by function. So this was very encouraging for us -- and that was the first approach.
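
The clustering step itself is standard; a hedged sketch with SciPy, using random profiles in place of the real drug fingerprints, could look like this.

```python
# Sketch: hierarchical clustering of per-drug feature-shift profiles.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
profiles = rng.normal(size=(60, 110))  # 60 responsive drugs x (11 markers x 10 features)

# Correlation distance groups profiles with the same shape, even if the
# overall magnitudes differ between drugs.
dist = pdist(profiles, metric="correlation")
tree = linkage(dist, method="average")
groups = fcluster(tree, t=15, criterion="maxclust")
print(groups[:10])  # cluster labels; ideally these track functional categories
```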

The problem with that first approach is that we're looking at individual features one at a time. There's no way, in that approach, that I can see correlation changes among multiple features.

So instead, in the second approach, I'm going to look at all the features in one cell together. Going back to the conceptual model of the high‑dimensional feature space: I take an individual cell, extract all the features at one time, and map it to one point in this high‑dimensional feature space. As a result, I can have many dots that represent a population -- for example, the blue dots can represent the control population.

Now, what can you do with this? I know that there are many different multivariate approaches out there. But for us, when I have two populations, the only question I'm trying to address is: do I see a drug response? And the way we interpret that question is: how well can we separate them?

One of the most obvious algorithms out there is the support vector machine. What that means is we're going to find a best separating plane to separate these two populations. And think about it: if you have a control population and a drug‑treated population, and the drug dose is very low, what do you expect? The two populations are going to mingle with each other, and no matter how you try to separate them, the classification accuracy is going to be really low -- about 50 percent -- because you cannot separate one from the other.

So what this gives us is the separating plane and a classification accuracy. That is our representation. To put it into real data again: what we're doing here is looking at drug dosage data. For every dosage, we look at the individual cells and put them as points in the high‑dimensional space. So we have a control population and a drug‑treated population, and we try to find the best plane and its normal vector.

The hyperplane can be described uniquely by its normal vector together with the classification accuracy. When you go up to high concentrations, you see that the drug‑treated cells and the control cells can be more easily separated. And as you go up in dose, you get a whole profile of normal vectors across the dosages.
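
A compact sketch of that multivariate step, assuming scikit‑learn and synthetic cells, is below: fit a linear SVM between control and treated cells, use cross‑validated accuracy as the "is there an effect" score, and take the unit normal of the hyperplane as the direction of phenotype change.

```python
# Sketch: SVM separation of control vs. treated cells at one dose.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
control = rng.normal(0.0, 1.0, size=(2000, 300))  # cells x features
treated = rng.normal(0.2, 1.0, size=(2000, 300))  # one drug, one dose

X = np.vstack([control, treated])
y = np.array([0] * len(control) + [1] * len(treated))

svm = LinearSVC(max_iter=10000)
accuracy = cross_val_score(svm, X, y, cv=5).mean()  # ~0.5 means no effect

svm.fit(X, y)
normal = svm.coef_[0] / np.linalg.norm(svm.coef_[0])  # direction of change
print(f"accuracy = {accuracy:.2f}")
# Normal vectors from different doses and drugs can then be clustered by
# the cosine between them, i.e., how closely they point the same way.
```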

The normal vector just tells you the direction of biggest phenotype change -- that's what it's telling you. And because it's a direction, we can cluster the normal vectors by how closely they point in the same direction.

With that -- on the top of this graph here -- for camptothecin, we were able to cluster the vectors into three different direction clusters, which we call the zero effect, the first effect and the second effect. The reason there is a zero effect is that we also look at the classification accuracy. Remember, I told you that at very low dosage the treated population should be very similar to the control population, and we want to make sure that the accuracy is high enough before we call something a real drug effect.

With that, we were able to separate the low‑dosage and no‑dosage effects from the first and second dosage effects. And it has been documented in the literature that camptothecin has two different effects.

So when our postdoc was doing this, we said okay, go ahead and see what you can do with multiple drugs. He applied his algorithm, got a different profile for each drug, and handed us this hierarchical clustering.

I don't remember how many drugs he counted, but the functional category is indicated in the parentheses. So the entries marked HH, for example, are all the same functional category.

This tells me that this can classify the drugs essentially perfectly. And what we asked him was: are you sure this is right? Did you just do this in PowerPoint? It turns out that he did not. He did not cheat. And so we were very pleased with the accuracy.

With this high‑dimensional representation of the profiles, one of the good things is that the classification accuracy was really high. But one of the bad things is that it's really hard to look at -- to visualize -- in a high‑dimensional feature space.

So to accommodate that, we pick the three most important features, put all the drugs in, and visualize them in three dimensions. What that allows us to do is go in, look at a drug we're interested in by its dosage effect, and then look at which other drugs are close to it. So even though the analysis is really done in a high‑dimensional feature space, it can also be visualized in a lower‑dimensional feature space to provide some insight into the drug mechanism.

Now, to come back to the original question that has been raised by many people in the workshop: what are the phenotypes to capture? We went through this approach of collecting an unbiased feature set -- in the study I just talked about, we captured about 300 features in each individual cell, and there's no way I want to go through 300 features in every cell. So what we decided to do is drop features systematically and see whether the accuracy would stay the same. And this is the graph to show you that when we drop down to about 20 features, we still get reasonably high accuracy.

In fact, this is a very strong result, in the sense that it tells you that for almost every marker set -- at least in our dataset; I cannot speak for your own biology data -- you probably only need about 20 features per marker set. The features are different depending on which marker set you pick -- their importance is summarized in this figure on the left -- and it also depends on what drug mechanisms you are trying to uncover.
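
One plausible way to run that drop‑features experiment, sketched here with scikit‑learn's recursive feature elimination (the published procedure may differ in detail, and the data are random placeholders):

```python
# Sketch: keep fewer and fewer features, check that accuracy holds up.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(4000, 300))   # cells x ~300 unbiased features
y = rng.integers(0, 2, size=4000)  # control vs. treated labels

for n_keep in (300, 100, 20, 5):
    selector = RFE(LinearSVC(max_iter=10000),
                   n_features_to_select=n_keep, step=0.5)
    Xr = selector.fit_transform(X, y)
    acc = cross_val_score(LinearSVC(max_iter=10000), Xr, y, cv=3).mean()
    print(f"{n_keep:3d} features: accuracy {acc:.2f}")
```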

So to summarize what I just talked to you about: we have gone through two different approaches, a univariate approach and a multivariate approach.

The good thing about the univariate approach is that it's very easy -- as I just showed you earlier, one figure can give you a whole big overview of the drug response, which is very nice -- and we were able to combine all the different marker sets together. However, it's not as sensitive. The multivariate approach is very sensitive in terms of drug mechanism clustering; however, it's much less intuitive, and it requires much more computational power.

As I told you earlier, our lab is really in a basic research direction. So what we would like to do, armed with these different approaches that I have shown you, is look at specific biological networks. The biological systems we are looking at right now in the lab include chemotaxis in primary human neutrophils, the insulin response in adipocytes, and drug response in cancer cells.

We are also looking at multiple markers. We need to pick specific markers for each pathway, extract the exact phenotypes we're interested in for our biology questions, and go through the perturbations we need to do.

This work was done with the great people in the lab and also my long‑term partner, Steven Altschuler; the univariate approach was developed by Mike Sacks, and the multivariate approach was developed by Lit‑Hsin Loo.

Thank you. I would also like to acknowledge my funding sources -- they have been really helpful -- and our collaborators. Thank you.

DR. PLANT: Thank you, Lani. That was really interesting. Anybody have any questions?

DR. TUAN: Rocky Tuan, NIH. That was a very fascinating talk. So it just brought some things to my mind.

To some extent, you took a lot of very cell‑specific data and you homogenized them -- I mean, you analyzed them and homogenized them at the end. And I was very fascinated when you said that when you start dropping the variables, you were able to use -- I think you said 20 or something.

So this just brings back some memories of classical biochemistry in the early seventies. Christian de Duve, of course, won a Nobel Prize for subcellular fractionation, which could be technologically extremely specific in terms of recovery of mitochondria, membranes and so forth -- very clean. So I'm just wondering -- and that's also homogenizing; it's literally homogenizing. You take a tissue, and you grind it up.

So I'm just wondering, if you were to take just a few of the things that you're looking at -- namely, whether something is associated with the nucleus or the cytoplasm or with cytoskeletal things -- and go take a look at the response to drugs, whether you were able to, I guess, validate using the classic, more biochemical approaches.

DR. WU: Okay. I'm not quite sure I understand your word about homogenize it because we did not. All that's ‑‑

DR. TUAN: I guess that's not the right word. I mean taking all the data and then crunching to get a plot that you show us.

DR. WU: Okay.

DR. TUAN: The same thing with taking a cell part, I mean we have all these fractions and you do all these enzyme assays and whatever. At the end it's just a plot. It's some bar little graph. That's what I meant. I don't ‑‑ whatever the term is.

DR. WU: Right. But I think one of the things that is different about the microscopy approach is that when you see a plot, you can go back to the original data and original features. It's different from a Western blot, where everything is lost.

DR. TUAN: No, no. It's not Western blot. I'm not talking about Western blot.

DR. WU: Right. But I'm saying that the main thing about this plot -- at least for us -- is that we look at the plot and say, oh, these are some interesting features. And I can go back to the features, and I can even go back to the populations of cells, the images -- go back to the origin and then go forward. So that's one point that you raised.

Then the other point is whether we look at actin -- for example, we have a marker set with actin and microtubules. And, in fact, for the markers you mentioned, we would have very good detection ability, too.

DR. TUAN: But still my point is that if you were to go back ‑‑ no, I agree that the power of the analysis is amazing, of course.

I mean but if you were to go back to a cell, that cell naturally can't tell you anything, right? It's the population that tells you the information. That cell alone is one component inside the dataset, which is valid and quantitative and all that. But that one cell doesn't tell you anything. You need the 10,000 or 10 million or whatever to give you the ‑‑ so that's what I was just thinking.

I mean, again, biochemistry is old‑fashioned and what have you. But I'm just curious whether you have taken any of these spots and just kind of use ‑‑ to some extent almost prove whether the old biochemistry was worth anything.

DR. WU: Oh, okay.

DR. TUAN: Just a comment, not a criticism.

DR. WU: I mean, that's a really good comment. That's new work that I did not talk about today -- analyzing heterogeneity. It's like what everybody has been talking about: no cell is the same.

But the question there is that every cell is different. And we have just come out with some algorithms and some analyses to show -- let me rephrase it. The question is: if you really want to analyze the heterogeneity, do you really need to treat every cell as being in its own state or not?

This, I think, really depends on what question you want to answer -- what your success metric is. And in the study we just did recently, using the same drug data again, the success metric was really classifying drug mechanism.

What we have found, surprisingly, is that we only need about four distinct phenotypic subpopulations to really quantify the drug mechanism. What that means is that even though the complexity of the heterogeneity looks daunting, if you look at it the right way, maybe the complexity is not as high as you would expect. And that provides better insight into how to attack this problem.
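
To illustrate the idea of summarizing heterogeneity with a handful of subpopulations, here is a hedged sketch assuming scikit‑learn; the four‑phenotype structure and all the numbers are synthetic stand‑ins, not the lab's published algorithm.

```python
# Sketch: describe a cell population by its mix of a few phenotype states.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
cells = np.vstack([rng.normal(m, 0.5, size=(1000, 20))
                   for m in (-2.0, 0.0, 2.0, 4.0)])  # four latent phenotypes

gmm = GaussianMixture(n_components=4, random_state=0).fit(cells)
fractions = np.bincount(gmm.predict(cells), minlength=4) / len(cells)
print(fractions)  # the per-treatment "mixture fingerprint" of phenotypes
```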

DR. PLANT: If I could just interject one thing.

I think, Rocky, one of the questions you're asking -- and it's a really important question -- is whether the cell‑based assays are validated with other biochemical methods, so that you know you're measuring the right thing. And that's where the accuracy question comes in, right? Absolutely, yeah. And that's maybe a separate question -- I'm sure that you guys did some of that. But that's one of the things that the ASTM committee is very concerned about as well.

But I think also one of the significant things that might be part of your question is that it might be required that somebody look at a whole bunch of data in order to decide which are the most sensitive markers -- the ones that, say, people in tissue engineering labs would then want to focus on.

MS. LUMELSKY: Nadya Lumelsky from NIH. Sort of the flip side of the same question, talking about heterogeneity: in reality, samples of primary cells or tissue‑engineered samples are inherently heterogeneous -- functionally heterogeneous.

So would your method allow one to identify distinct, functionally different populations and study them with these multivariate parameters?

DR. WU: Let me qualify this. I think what I have shown you today is what can be done. When you want to address a different question, that does not mean that this is the way you should do it.

MS. LUMELSKY: But it would be nice if that could be used.

DR. WU: Right. In fact, we have -- maybe I can talk to you later about it -- a different project that kind of addresses this question, but it's really preliminary. So I would prefer to talk to you in private about this.

DR. BERTRAM: Thank you, Dr. Wu. Fantastic presentation. Very exciting.

Two questions, one philosophical and one specific and technical. Yesterday Buddy presented and said a little bit of heresy, which was fantastic. He said the way we get our principal component analysis to work really well is we actually go in and filter the data, and then we analyze it. By the way, I don't think that's cheating -- he said it was cheating; I happen to disagree with him. But be that as it may, I think he's giving us some tremendous insights.

The tremendous insight, and therefore the question, is this: have you considered integrating human intelligence as part of your automated algorithms in order to give you new insights? And if yes, I'm curious how you're actually doing that -- pre, post, and where the integration occurs.

DR. WU: Okay. So that is the part where we're doing human neutrophils right now. In that particular project, we have very specific questions we want to ask, like what the front‑and‑back coordination is. And so we have gone in and are really trying to implement the right features to capture what we want to capture.

DR. BERTRAM: I look forward to that publication.

DR. WU: I look forward to it, too.

DR. BERTRAM: The second thing, which is much more technical -- it's a little bit of a spin‑off of Rocky's point, and maybe it was made previously. One of the things I noticed -- and it was probably just for the purposes of presenting here -- is that you have a tendency to focus on the responders.

In a tissue engineering situation, it's the cells that are not responding that matter too -- as a pathologist, I'll use the term "stroma." The stroma is supposed to stay static relative to the cells that are responding -- although, now, the stroma is dynamic; it's constantly under remodeling. So even that isn't totally static.

But my point is, from a technical level, can your algorithms distinguish those cells that are also stable, if you will, and not changing?

DR. WU: Not in the current two approaches that I have described, but a new approach that hopefully is coming out soon, that it will be able to address that.

DR. PLANT: Was there another question?

And there will be, of course, another question and answer period later.

Okay. Well, let's thank Lani again.

We're going to take a break now and reconvene at 10:30. So it's about a 15‑minute break.

(Whereupon, a recess was taken.)

DR. PLANT: Okay. Time to get started with the next part of the session. We're running just a little late, but I think that Dan has a presentation that mostly will work now.

So it's a pleasure to introduce Dan Martin, who's our next speaker. Dan's an assistant professor and proteomics facility director at the Institute for Systems Biology in Seattle and also a lecturer in hematology and oncology at the University of Washington.

Dr. Martin received a bachelor's degree in mechanical engineering at Cornell and an MD at Yale University, and completed an internship and residency in internal medicine at the University of Colorado in Denver.

He joined the Clinical/Research Fellowship in Hematology and Oncology at the University of Washington in 1998 and, after a year of clinical training, began basic science research under Ruedi Aebersold at the Institute for Systems Biology.

His research has focused on the use of proteomics methods for analysis of the androgen receptor complex in prostate cancer. In addition, he's developed a program to identify biomarker candidates from cultured prostate cancer cells and evaluate the presence of these biomarker candidates in the serum of animals xenografted with the same prostate cancer cells.

And so it's a pleasure to introduce him today and have him give us a talk on mass spectrometry‑based proteomic applications for cell/scaffold products.

DR. MARTIN: Thanks very much. Thanks for having me, and please bear with me because of the sliding graphic. That wasn't one of my PowerPoint 2007 graphics. That's the projector, I guess.

The ISB where I work is in Seattle. And in case you haven't heard recently, this is the beautiful Seattle skyline in the summer. This is really what we've been having lately.

I'm here today from Seattle to talk about mass spectrometry‑based proteomics and potential applications to cell/scaffold products. Keep in mind, I'm an academic. I have no financial disclosures with regard to biotechnology products.

And the approach I'm going to take today is more one of metrology. The aspects of proteomics that I'm going to try to explain are how we use this technique to measure, and hopefully you will use your own creativity to figure out how it will work for you.

So the first question is: what is proteomics? One definition I like is that proteomics includes the identification and quantification of proteins, as well as their localization, modifications, interactions, activity and, ultimately, their function. That's a very broad definition.

And just to make one point very early on, about the proteome versus the genome: it's important that we distinguish their features. And the one that I would really point to is that for proteomics, as I'm going to describe it, we do not have PCR. There's no amplification in the study of proteins.

Obviously, the genome is static and the proteome is quite dynamic from cell to cell, as we saw in a lot of the images we've seen. It's quite heterogeneous even from neighboring cell to neighboring cell in a monoculture. And there's tremendous variability in the amount of protein one might see from sample to sample.

So proteomics is a lot of things to a lot of different people. I'm talking about mass spectrometry‑based proteomics, but proteomics might mean two‑dimensional gel electrophoresis to somebody, mass spectrometry to me, protein chips or yeast 2‑hybrid, phage display, antibody engineering. It means a lot of different things to a lot of different people. So I will not say that proteomics isn't any one of these things. I will just tell you about what it is to me. And I'll tell you what it is and what it isn't.

So it is a highly powerful tool for protein identification and quantification, and it's complementary to other technologies and analysis methods. And what it's not: it's not magic. It's not going to give you all the answers. It's not like someone's going to walk in with a little device, take your scaffold or whatever it is you're studying, go "do, do, do," and have all your proteins in a dramatic list for you.

It's really not all that simple. But it's not that hard, either, compared to some of the other stuff in multi‑dimensional space -- I honestly can't say. It's not hard, but it's not that simple.

And it's definitely not cheap. It really depends on the scale of money you think about, but these instruments run on average half a million dollars and up, excluding service contracts and the FTEs to operate them.

So, mass spectrometry basics 101: what can we measure? We can measure proteins in mixtures. We can do quantitative analyses of protein expression. We can measure post‑translational modifications, such as phosphorylation -- which is a challenge, and the field is advancing in that aspect -- and glycosylation, present or absent, though not the nature of the glycan, at least in mass spectrometry‑based proteomics.

There's a whole field of glycomics. There's all the omics that you would like. One of them is glycomics, metabolomics, lipidomics. We can also measure protein interactions.

So I'd like to give you a flavor for what mass spectrometry‑based proteomics can do for you. And I've talked to a lot of people at this meeting, and I've gotten the impression that there's a room full of very, very intelligent people, but many of whom really don't know all that much about proteomics. So I'm going to focus my talk on the proteomic neophyte. And so here is basically what proteomics can do for you.

If you have a protein gel -- in a metaphorical way -- you can put a name on every band on the gel that you can see. And you can also put names on bands that you can't see. So this would be a theoretical experiment. This might be your scaffold.

Basically -- in this case, let's just talk about cells in culture. Ultimately, you want to take these cells and get them into the mass spectrometer to figure out what proteins are there. What you do is take your cells and extract the proteins. This is proteomics: we work with proteins, which I'll define as strings of amino acids with molecular weights greater than 10,000.

Then you take the proteins and digest them into peptides. We typically use trypsin, and I will elaborate a little bit on this because trypsin is an enzyme that cleaves after arginine and lysine, and that will become important. K is lysine, R is arginine. And the reason it's important is that both ends of these peptides now have amines, and these amines support the ability of the groups on either end to carry a charge.

And so -- now let's shift gears to a mass spectrometry primer. What does a mass spectrometer measure? It measures the mass‑to‑charge ratio. We call that m/z, so you'll see that in a number of the slides here.

So you can't measure something in a mass spectrometer unless it's charged. And the basic mass spectrometer configuration looks something like this: you have to ionize your material to make it charged; you have to somehow do a mass separation -- separate your analytes according to mass; and then you have to collect the ions, or count the ions.

So I'm just going to touch very briefly on ionization. This is electrospray ionization. Here we have a very, very small emitter -- a very tiny fused silica capillary with a diameter of 300 microns, into which you typically pack a reverse phase chromatography resin. You put peptides such as these into this emitter, and you put a large voltage between the emitter itself and the mass spectrometer. And this induces a cone -- that's what this picture is of.

So basically, your peptides are stuck in a little droplet of liquid. The solvent is charged because of the potential, and as the droplet moves towards the mass spectrometer, the solvent evaporates. As it evaporates, you ultimately wind up with a higher and higher charge density, and the droplet essentially explodes when the charge gets too high. In the end, you wind up with a peptide that is charged. And the charge lives on either end -- that's why I drew it this way.

So this is a type of mass spectrometer called a triple quadrupole, and it's the version I'm going to use to explain mass spectrometry. There are three quadrupoles -- each of which is four poles arranged around a center -- in tandem. Here's your ionization source. Here's your mass separator. Here's your ion collector. This is, effectively, a mass spectrometer.

This quadrupole looks something like this. This is the theoretical set‑up; this is the actual set‑up. Basically, there are voltages placed on opposite rods. You don't really need to know the details, but basically you have electric fields that guide the movement of the ions.

And so what that means is that when the charged peptide is in this region, it will behave in a certain way. I've described it in such a way that it's stable: it's able to stay in this quadrupole region, and it will make it all the way to the end of the quadrupole as it travels from left to right.

You can also set up the electric fields so that they don't produce stability for your charged analyte. Then, when you try to move the analyte down the length of the quadrupole, it hits one of the rods. And when the charged peptide hits a rod, the charge goes to the rod; you still have the peptide there, but it's not charged, and you can't measure it anymore.

So the way to think about this, without getting too involved, is that this system can act as a filter. It's like the tuner in your radio: you can tune it to a certain mass‑to‑charge ratio. And here's another analogy, from what I used to do as a kid with the old dial radios. You'd just spin the dial as fast as you could, and as it hit each station, you'd hear moments of noise that represented the signal of that radio station.

Now, you can imagine the same exact thing going on with the mass spectrometer. And it's not just one peptide that's going in, because if you digest a lot of protein, there are a lot of peptides -- maybe hundreds. And for a moment, your mass spectrometer is tuned to each one of these peptides. You get the signal, and you measure, effectively, a histogram of what's there. And so this is how you would say these m/z's -- i.e., peptides -- are there.

And so there's another thing, though, that you can do. And this is why there's three of these quadrupoles together. If you have a particular peptide, you can excite it such that in moving left to right, it goes faster. You put energy in.

And so you create a situation where there's a kinetic collision between the peptide and some nitrogen atoms. And so what happens in that collision is the peptide will break. And if you tune the energies right, what happens is the weakest bond, or the most likely to fragment bond, is the amide backbone, none of the side change fragment.

So what you get is this peptide that is ultimately the population ‑‑ there may be thousands of them ‑‑ can be broken in this fashion that any one of the amide bonds. And just to remind you, there are 20 amino acids with 19 different weights. And so that will become important in just a second.

So this is a schematic of what happens. If you have a particular peptide sequence, this is what you would get. These are all the possible bond breakages, and you get pieces; where if you start from the left ‑‑ and remember, this is also why it's important if there's a charge on one side, the amino terminal and the carboxy terminal both halves of the broken peptide have a charge. So they can be measured because they will have a charge to mass ratio.

Now, I just want to redraw it for you. So we had ions on one side. And I labeled one B, and I labeled one Y. That's the nomenclature of the field. But I'm just going to redraw the Y ion for you ‑‑ just turn it around a little bit. It's the same sequence, but since we're in a left‑to‑right universe, I want to redraw it so that all the fragment possibilities read the same way.

So what does that mean? What that means is if I actually looked at all the fragments, you can generate a ladder. And the ladder represents combinations. So you can see the N. You can see the NS. And so here's the N. Here's the NS. Here's the NSG. And these are the B ions moving this way. And so it stops at a peak. So here's NSG. Here's NSD. And then here's RGAISG. And it stops at this big, tall peak.

So effectively, you get a ladder. You can think of it as a ladder; you can think of it as a fingerprint. But what this means is, because there are 20 amino acids ‑‑ if you look over here at the V, there's about a hundred mass units between those peaks, and that spacing defines what a V is. And every amino acid has a different separation.

So ultimately, with a good spectrum, an intelligent person can sit with a calculator and say okay, I think there's a V here and maybe a V here. And you can deduce the sequence of the peptide that you broke. And this, ultimately, is the fundamentals of mass spectrometry: deducing the sequence of a peptide.
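A hedged sketch of that calculator exercise: given an ordered fragment‑ion ladder, the mass gap between neighboring peaks identifies each residue. The peak values below are invented for illustration.

```python
# A few standard monoisotopic residue masses (Da); leucine and
# isoleucine are both included to show the one true ambiguity.
RESIDUE_MASS = {"G": 57.02146, "S": 87.03203, "V": 99.06841,
                "L": 113.08406, "I": 113.08406, "D": 115.02694}

def read_ladder(peaks, tol=0.02):
    """Map the mass gaps in a fragment-ion ladder back to residues."""
    sequence = []
    for lo, hi in zip(peaks, peaks[1:]):
        gap = hi - lo
        hits = [aa for aa, m in RESIDUE_MASS.items() if abs(m - gap) < tol]
        sequence.append("/".join(hits) if hits else "?")
    return "-".join(sequence)

# Invented ladder: gaps of ~87.03, ~57.02, ~115.03 Da read as S, G, D.
print(read_ladder([115.05, 202.08, 259.10, 374.13]))  # S-G-D
```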

So this is how we do shotgun proteomics. And shotgun proteomics is just a high‑throughput way of doing what I just showed you. You have a group of peptides that go into the mass spectrometer. You turn off the first two of the three quadrupoles, and you take the third one and do that radio‑scanning trick I showed you, which generates this spectrum. And you say okay, this is what's there.

We call this an MS survey scan. These are the peptides that were there. And then you decide well, I'm interested in this one over here. And when you decide you're interested in that one, you turn on the first quadrupole, and it filters just like I showed you before. And you turn on the second quadrupole, and it accelerates and breaks that peptide. And then you take the third quadrupole, and you do the radio trick where you scan the whole range. And lo and behold, at the end you get the spectrum. And you can say okay, this peptide was there.

But this is not something that we can actually do in high‑throughput fashion if you have to sit there with a calculator, because this is an old, easy‑to‑explain type of instrument. The current instruments will do this roughly three times a second. And you'd need a lot of people with a lot of calculators to actually handle that kind of data.

So what we do is we use a computer program. And one of the variants we use is named SEQUEST. It functions by searching the peptides ‑‑ it assigns a peptide sequence to a spectrum. The computer does it for you. And the way it does it is you start out with a scan of something whose m/z you know. So if it's 750 and you knew the charge state ‑‑ there were two charges on it ‑‑ its molecular weight was 1,500.

So what this program does is it goes to the proteome, and it starts at the first amino acid. And you say give me every single possible peptide that has a mass of 1,500. And depending on the size of the proteome, you could have quite a few of them.

Then what it does is it fragments each one of these potential candidates virtually and compares it to the spectrum that you actually acquired. And because there are 20 amino acids and the number of possible combinations of these 20 things is so high, one peptide wins and stands out against all the others.

So here is one way to think about how it works: basically, it's a dot product. You acquire this spectrum. This is a possible theoretical spectrum for, say, this peptide. And if you do a dot product, multiplying vertically, you get this ‑‑ this is the product.

Now, let's just say you have a single amino acid change. Well, you'll have the same acquired spectrum, but this theoretical spectrum will be shifted. Everything will be moved just by one amino acid. So the dot product is actually going to be a lot less. And that's sort of a graphical explanation of why this one peptide wins.
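A toy version of that dot product idea ‑‑ not the actual SEQUEST scoring function, which uses a cross‑correlation, but the same intuition: bin both spectra, multiply vertically, and sum. All peaks and intensities here are invented.

```python
import numpy as np

def dot_score(acquired, theoretical, max_mz=2000.0, bin_width=1.0):
    """Toy SEQUEST-style comparison: bin two peak lists and dot them.
    Each peak list holds (m/z, intensity) pairs."""
    n_bins = int(max_mz / bin_width)
    def binned(peaks):
        vec = np.zeros(n_bins)
        for mz, intensity in peaks:
            vec[int(mz / bin_width)] += intensity
        return vec
    return float(np.dot(binned(acquired), binned(theoretical)))

# The aligned candidate shares every bin with the acquired spectrum;
# a one-residue change shifts all downstream fragments, so most of
# its peaks land in empty bins and the score collapses.
acquired = [(115.0, 10), (202.1, 8), (259.1, 9), (374.1, 7)]
aligned  = [(115.0, 1), (202.1, 1), (259.1, 1), (374.1, 1)]
shifted  = [(115.0, 1), (216.1, 1), (273.1, 1), (388.1, 1)]
print(dot_score(acquired, aligned))   # 34.0
print(dot_score(acquired, shifted))   # 10.0
```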

So at the end of an experiment where you're doing three acquisitions a second, you get a huge table. This is our format ‑‑ a huge table of what was actually in the experiment. So in the end, you get a list.

That gel example I showed you ‑‑ the analogy is actually a real gel. So in this experiment ‑‑ I don't remember exactly, but I think we found 400 proteins. And you could look at the gel carefully, and there are not 400 bands that are clearly visible to the human eye. And this gets into dynamic range, and I'll go into that in a little more detail.

That was an IP. If you take cultured yeast and you do a day's worth of proteomics after a single separation, you can find many thousands of proteins, represented by, say, five peptides per protein on average. And if you do the same on mammalian cells, that's about what you're going to find. And I'll add a little caveat: you need to be careful what you wish for, because you will get what you wish for, and it's going to be a big list.

And so I just also wanted to address the limitations of this whole technology. The dynamic range of the process is about three or four logs. That means, roughly, that the limit of what you can see is one one‑thousandth to one ten‑thousandth of the concentration of the most abundant analyte, assuming relatively equal distributions of all the peptides. And the complexity limits the analysis. The machine is sampling, and the more you have there, the more it's got to sample through. At least in this shotgun method, you're limited by your sampling speed.

And generally, your quantitation ‑‑ and we'll get into quantitation in a little while ‑‑ is relative rather than absolute. And that will make sense a bit more in a little while.

So what does this mean? Mass spectrometry‑based proteomics is good at in‑depth analysis of a pure sample across three or four logs. So if you study cells, if you study organelles, and you do IPs, you'll get results that span thousands of proteins. And these results can be quantitative. You can, as you'll see, say: if I add drug, how does the whole proteome change ‑‑ in a completely analogous way to an array, just with a smaller density of data.

It's really not good at studying samples that have dominant proteins and tremendous dynamic ranges. And the prototypical difficult sample to study is plasma, dominated by one protein ‑‑ or CSF, or urine. So tissue culture media that has calf serum in it, or any serum in it, represents a major challenge.

So then here we are asking the question well, what can this do for me. And we've heard over and over, biomarkers. I don't know how many times in the proteomics business that I've heard about biomarkers. And there's a tremendous promise with biomarkers and proteomics.

And people are looking for cellular markers. And these are, I think, Rocky's words: markers that will tell us a story. Markers that tell us how our tissues are doing, or whether we have the tissues that we actually want. How's our scaffold? Is our scaffold the same this week as last week? When I implant these, how is my host doing? We all want these biomarkers.

And proteomics and biomarkers have actually not been a success story. People, I think, years ago thought that proteomics was going to come in and be like Batman ‑‑ just save the day, and we were all going to have our answers. And the expectations have been really high. You get these articles in Science magazine: medicine is going to become personalized. And a lot of the limelight has been focused on proteomics. And to a certain extent, proteomics has delivered.

But the cup is half empty. A lot of people really thought that if you take clinical samples, such as blood from somebody who is ill or has cancer, and blood from somebody who is well, you could look at them side‑by‑side and say here's the biomarker. And as I've probably illustrated for you, this is not the analysis that's going to lead to the biomarker. You just can't look at blood.

But there's another way to do it. And that would be target‑based validation strategies where one might be interested in fibroblasts or one is interested in cancer cells or lymphocytes, and you really make an effort to study these tissues themselves in depth. And then you come back and you try and validate in some other media where you already know what you're looking for and say can I see it there. And that turns the question around ‑‑ and we'll get to that in a minute ‑‑ to a more approachable methodology.

And here's just why serum is such a challenge, and we probably all know this. Albumin makes up half of the total protein mass in serum. And the 22 most abundant proteins make up 99 percent. So if I told you this method is worth three logs, maybe four, the first two are burned on the proteins you care the least about. So the discovery of low abundance proteins and low abundance biomarkers right out of serum is a big challenge.
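Those figures make the "first two logs are burned" point directly. A back‑of‑the‑envelope version, using only the numbers just quoted:

```python
import math

# Serum composition from the talk: albumin is ~half the protein mass,
# and the 22 most abundant proteins make up 99 percent.
albumin_fraction = 0.50
top22_fraction = 0.99
everything_else = 1.0 - top22_fraction     # ~0.01 of the mass

# With 3-4 logs of dynamic range anchored at albumin, the detection
# floor sits at roughly 1e-3 to 1e-4 of albumin's concentration.
floor = albumin_fraction * 1e-4
logs_burned = math.log10(albumin_fraction / everything_else)
print(round(everything_else, 2), floor, round(logs_burned, 1))
# 0.01 5e-05 1.7  -> roughly the first two logs spent on abundant proteins
```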

So I just wanted to focus a bit on the analysis of cells. And protein lists can be assembled, and you will get your wish. If you do a good experiment and you provide a nice, well‑fractionated sample, you will get a very large list of what's going on in the cell through the first three or four logs.

You'll see thousands of proteins. They're all potential biomarkers. I don't really have a good way of telling them apart for you. And this might really help some people. So of the talks I heard yesterday, I was thinking about the embryonic stem cells. You can focus your analysis on membrane, mitochondria, ribosome. If you are focused, you can get a list of proteins that might be really helpful to you.

If you're not focused, you're going to get a list of proteins. And so one would ask well, if I don't really know what I'm interested in, how do I just sort of follow them all. And I will transition into targeted proteomics. And so what I described earlier was a method where you look at what's there and you say ‑‑ you basically go through what is there from tallest to shortest, and you try and identify it.

And I think there's another way to do it, which begins with: I have a candidate list. I know these are the N proteins I'm interested in. And I'll call this atlas‑based proteomics ‑‑ you'll see why in just a minute.

I'm not a visionary, but my crystal ball says this is a big part of where the field of mass spectrometry‑based proteomics is going. And a large part of that is because when you have to rediscover everything every time you do an experiment ‑‑ to fight through all the metabolic proteins, all the ribosomal proteins, the abundant proteins ‑‑ most of the effort is just rediscovering. So I think the world of mass spectrometry‑based proteomics will be waking up to targeted analysis.

The goal of this whole methodology is to create a list of peptides that represents the proteome that you are interested in. And obviously ‑‑ well, not obviously, but what you want to do is use non‑redundant peptides. You don't want a peptide that represents five proteins. That's not very useful.

What you will do is use a variant of mass spectrometry called MRM, or multiple reaction monitoring mass spectrometry, to ask the questions: present or absent, and how much? This has a higher dynamic range. It's a much higher sampling speed method, but it's still not magic.

So let me show you how this works. Going back to our triple quad, the first quadrupole here does its filtering thing. So let's just say we want the red peptide. The second one breaks it just like before. And now you have the fragments. And normally, we do the radio tuning thing or scan thing where you scan through the radio. And that's actually quite an inefficient process because you're only hitting every frequency for a very short amount of time.

So this process says ‑‑ it's like using the buttons on your radio, channel 1, channel 2, channel 3, channel 4. And at any given moment, you might set this for channel 1. You say is the fragment that I'm looking for actually there. So you are actually achieving a double filtration to achieve your specificity, one filter, two filters.

The way I would describe it is: here's the peptide. Here's the shotgun spectrum. This is the scanned spectrum. Now, each one of these colored peaks represents a tall peak ‑‑ a Y ion, the kind that keeps the arg or lys from the right end, and that will be important in a minute ‑‑ and you take that type of ion and say okay, I would expect this spectrum to have these peaks.

And so you just program those in as buttons 1, 2, 3, 4. And if the peptide is really there, it should produce all the fragments. And because we're doing chromatography, all these fragments had better be there at the same time, and they had better look about the same.

And if they're all there, you can have a very high confidence that your peptide was actually there. So each measurement is called a transition. And what you're asking the machine to do is push the button really quick. And effectively, this can be done in roughly 10 milliseconds. So in one second, you can go through a hundred of these transitions.

So if your peptide is eluting from your chromatography column over 20 seconds, you'll come back and cycle through the whole list and really get a point roughly every second. And you'll get a nice one of these elution peaks, which has a calculable area under the curve.

So I have in my lab a Waters triple quadrupole. And this machine allows 32 individual segments that can be scheduled. So I can put right now a total of 1,034 individual transitions in. And because I've already studied these peptides and I know when they're going to come out, I can say, well, look for this particular peptide only in this group but don't look for it way down here. It's not going to be there.

And if you schedule them, at any given time there might be only three or four segments going on at once. And if each one has 32 in it, that's roughly 4 times 30 ‑‑ a hundred at any given time. And you're roughly cycling around once a second.

And so one calculates that one can do a hundred proteins per run. That is, if you're interested in a hundred proteins per run and you say, well, in order to really convince myself that the protein is there, I'm going to look for three peptides from that protein and then three fragments from each peptide ‑‑ that's roughly how I get to a hundred, because a hundred times 3 times 3 is roughly a thousand.

And so if each run takes an hour, 60 hours later, you can cover roughly 6,000 proteins in a proteome. So this slide basically just shows that the sensitivity goes up when you use this type of an analysis. And this slide shows that this up‑and‑coming field is supported by software. So we've actually developed software to allow you to look at all these different transitions, say do they line up, are they all grouping together over here, and actually spit out the areas under the curve for each one.
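The duty cycle arithmetic in the last few paragraphs can be written out directly. A hedged back‑of‑the‑envelope using only the talk's numbers (10 ms per transition, roughly a thousand schedulable transitions, 3 peptides per protein, 3 fragments per peptide, one‑hour runs):

```python
# MRM throughput, back-of-the-envelope, with the talk's numbers.
dwell_s = 0.010                        # ~10 ms per transition
concurrent = int(1 / dwell_s)          # ~100 transitions cycled per second
cycle_s = concurrent * dwell_s         # one full cycle takes ~1 s
points_per_peak = 20 / cycle_s         # ~20 points across a 20 s elution peak

transitions_per_run = 1000             # schedulable slots, roughly
per_protein = 3 * 3                    # 3 peptides x 3 fragments per protein
proteins_per_run = transitions_per_run // per_protein  # 111, i.e. "roughly a hundred"
hours_for_6000 = 6000 / proteins_per_run               # ~54, i.e. "60 hours later"
print(points_per_peak, proteins_per_run, hours_for_6000)
```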

And so going back to this PeptideAtlas: at the ISB, we've built atlases of all the peptides that we've seen from different proteins, from all kinds of researchers around the world. So instead of actually rediscovering these things, we can go to an atlas and say okay, what three peptides from this protein would we want to use? When did we see them in terms of retention time?

And the goal of my near‑term research ‑‑ this is a schematic representing the software ‑‑ is to basically say: give me three peptides for every protein, three fragments for every peptide. And I'm going to program that into my machine, and I'm going to try and build an atlas not of the theoretical peptides but of the ones I can say are good.

And then, for instance, if you're interested in studying something, you can look this up and say okay, which ones should we use and they'll be there for you.

So the implications of this are that shotgun proteomics, with its duty cycle issues and dynamic range limits, is, in my view, going to be superseded. This may increase the ability to interrogate a large number of targets in a biological sample. And it may help deal with the dynamic range.

How long do I have? Three minutes.

So let me just touch on the quantitative implications here.

So this peptide is a tryptic peptide. There's an arginine at the end. And here's the structure of arginine. You can buy arginine where all the carbons are heavier by a neutron and all the nitrogens are heavy. So effectively, this is heavy arginine. It's not radioactive. It's just isotopically heavy.

And the mass spectrometer sees the difference. It's a mass measurement device. The chromatography can't separate heavy and light ‑‑ that's why nuclear programs that separate uranium need those fancy centrifuges. So basically, they both show up in the mass spectrometer at the same time.

And I can see there's a light and there's a heavy, and there's an intensity associated with each one, which is quantitative. And so I can also then follow the elution profile of each one and calculate the area under the curve. And I would say that there's a ratio associated with the pair. So I can say the ratio here is 1.5:1.

So ultimately, what we'll do is, if you know the target endogenous peptide that you want, which has the normal isotopic distribution of carbon, you can call Sigma and they'll make you the heavy peptide. What you do is you program your triple quad to find the light fragments ‑‑ and this is why the Y ions are important: the fragment has to include the arginine. If you take it from the other end, you're not going to have the heavy information. And then you say, well, give me the fragments I would expect for the heavy one, which will be just a bit heavier. You can program this into the instrument.

And in the end, you can say okay, for this fragment, I got this peak and this peak, and you integrate the areas under the curves and say okay, the ratio might have been 2.0. And you kind of want all of them to be 2.0. If one's 2 and one's 4 and one's .5, you might have a problem.

But in my experience ‑‑ and we just actually finished this yesterday ‑‑ we did an analysis with 250 transitions representing, say, a hundred peptides. We looked for the heavy and light, and there's very good agreement between all the fragments, as one would expect.

So how will this ultimately play out? There may be a day when every one of the unique transitions that, as I said, define the proteome has a heavy peptide that you can buy. That way, if you spike it in, it is an internal standard. And if you know how much you put in and you know the ratio of endogenous to added, you actually have a mechanism for doing absolute quantification.
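A minimal sketch of that spike‑in arithmetic, with invented numbers. Commercially available heavy arginine (six 13C plus four 15N) is about 10 Da heavier than the light form, so the heavy standard co‑elutes but resolves cleanly in the mass domain:

```python
# Absolute quantification against a heavy-isotope internal standard.
# All numbers are invented for illustration.
def endogenous_fmol(light_area, heavy_area, heavy_spike_fmol):
    """Endogenous amount = (light/heavy peak-area ratio) x known spike."""
    return (light_area / heavy_area) * heavy_spike_fmol

# Integrated elution-peak areas for one light/heavy transition pair:
light_area, heavy_area = 3.0e5, 2.0e5      # a 1.5:1 ratio, as in the talk
print(endogenous_fmol(light_area, heavy_area, heavy_spike_fmol=10.0))
# -> 15.0 fmol of the endogenous peptide
```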

So let me get back to scaffolding. I think this is one way to look at proteomics. At some point with your scaffolding project, you'll have to do some discovery. You'll have to say, well, what is it that I might be interested in, at least at the protein level. I think you're going to have a list.

Then you're going to have to figure out for yourself how am I going to validate this. Will I use proteomics? Perhaps. Will I use an antibody? Will I use PCR if it's an enzyme? Will I use an enzyme assay? I think that's up to you as it might be scaffold appropriate.

Thinking back to some of the things ‑‑ I sat in the back yesterday and rewrote my talk three or four times because I don't think in terms of identity and all the FDA terms that are totally new to me.

You might ultimately discover proteins for in vitro assays while you're developing product and during production to monitor yourselves. You may monitor your process at the protein level with targeted mass spectrometry or affinity agents such as an antibody chip. Someone was talking about if this is a closed bag, how do we monitor this in a non‑destructive way. I know when we take blood, they have little tubes and you can clip them off. You can sample sterilely, I imagine. And so you might be able to develop an assay that you sample the proteins along the way. I don't know how you sample part of a bladder. That's again a specific problem.

You can monitor the growth media, and you can do targeted analysis. And you might be able to monitor the host at the protein level after you've implanted the device, if you know what to look for. Again, I'm not saying you can.

But I also just want you to just walk away thinking proteomics is just a tool. You can't screw in a light bulb with a hammer. Okay? It's not going to solve all your problems. But if you use it correctly, if you use a hammer correctly, you can do a lot of things. And I think the same is true of mass spectrometry.

There were a lot of ifs yesterday. If I had the tool to do this, if I had the tool to do that, if we could measure this ‑‑ and it just reminds me ‑‑ I want to close, since it is the Christmas season ‑‑ of a saying that I remember. And it goes something like, "if ifs and buts were candies and nuts, we'd all have a merry Christmas."

And I hope that I haven't been too much of the Grinch, saying oh, you can't do this. But I don't want to sound like Santa Claus either ‑‑ oh, it's going to be great and you're going to get everything you want. I think the truth lies somewhere in between.

And with that, if there's any time left, I'll take a question.

DR. PLANT: Thank you, Dan, for giving us the real scoop.

Anybody have any questions about proteomics for Dan?

MR. AZEKE: John Azeke, University of Florida. In the beginning of the talk, you said that you digest these normally with trypsin. How do you go about now taking the digest and determining which protein you have from the different fragments of the protein that you have chopped up into your peptides?

DR. MARTIN: Well, the work plan is effectively: you take the protein, digest it into peptides, and then you take every one of those peptides and clean them up with a clean‑up cartridge, so to speak. And when you have a whole lot of peptides in a tube, you basically put them in the mass spectrometer and say tell me what's there. And the machines out of the box will generally do that for you. You put a column on, and they elute over an hour. And the hundred that are going at any given time are sampled, and you might hit 22 of them. And the machine will run through that process I outlined and say these are the ones that were there. Each peptide then gets mapped to the proteome. And the problem then becomes, well, what if the same peptide's in four different proteins; how do you know which protein is there? And we tend to use an Occam's razor‑type approach, where we look at all the peptides and we say, well, what's the least number of proteins that explains all the peptides. And from that, you map to a protein list.
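A hedged sketch of that Occam's razor step, written as a greedy set cover. Real inference tools are more elaborate, and the peptide‑to‑protein map below is invented:

```python
# Minimal protein list inference: greedily pick the protein that
# explains the most still-unexplained peptides until all are covered.
def infer_proteins(peptide_to_proteins):
    unexplained = set(peptide_to_proteins)
    chosen = []
    while unexplained:
        counts = {}
        for pep in unexplained:
            for prot in peptide_to_proteins[pep]:
                counts[prot] = counts.get(prot, 0) + 1
        best = max(counts, key=counts.get)
        chosen.append(best)
        unexplained -= {p for p in unexplained
                        if best in peptide_to_proteins[p]}
    return chosen

# pepA is shared by two proteins, but P1 alone explains every peptide,
# so the minimal list is just [P1].
print(infer_proteins({"pepA": {"P1", "P2"},
                      "pepB": {"P1"},
                      "pepC": {"P1"}}))   # ['P1']
```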

MR. ROWE: David Rowe, Connecticut. So in its latest iteration, how sensitive are you now? How many cells do I have to give you to get this very focused digital answer?

DR. MARTIN: Well, the machines have finite sensitivities. And they reliably see things at the level of femtomoles. Let's just say a certain number of femtomoles.

So you have to ask yourself what am I looking at. If you're trying to find histone proteins or ribosomal proteins, you're going to need far fewer cells than you would need to find something that is at 10 copies per cell. So just take your molecule of interest, back calculate how many ‑‑

MR. ROWE: Just say I wanted, say, a thousand copies.

DR. MARTIN: Well, let's see ‑‑ you'll need 10^6 cells. I wouldn't start an experiment without 10^7 cells.

MR. ROWE: So 10^7?

DR. MARTIN: If you put 10^6 in, you might find it. But you're asking me to do the calculation in my head, and I ‑‑

MR. ROWE: Don't ask me.

DR. MARTIN: I'm totally sleep deprived. The train was running right behind my hotel room last night, and I can't really do powers of 10 right yet.
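The back‑of‑the‑envelope he was reaching for, sketched with assumed round numbers (a ~1 femtomole detection floor, no sample losses):

```python
# How many cells does a 1,000-copy-per-cell protein require at a
# ~1 fmol floor? (Assumed numbers, not a quoted instrument spec.)
AVOGADRO = 6.022e23
floor_fmol = 1.0
copies_per_cell = 1_000

molecules_needed = floor_fmol * 1e-15 * AVOGADRO   # ~6.0e8 molecules
cells_needed = molecules_needed / copies_per_cell  # ~6.0e5 cells
print(f"{cells_needed:.1e} cells")
# ~6e5 in the ideal case; with prep losses, 1e6-1e7 is the realistic answer
```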

MR. ROWE: It's just that, to me, whether it's microarray or proteomics, it's the biology which is the real driver of this. If we give you a complex mixture of cells, we're not going to be able to interpret the outcome. We need to give you homogeneous populations of cells to interpret what a cell is making. And we're having trouble getting high numbers of cells by FACS. So if we have to do this by FACS and get a million cells ‑‑ it's a challenge for us.

Even doing this double labeling, that doesn't increase the sensitivity? Can you do it as a ratio and increase the sensitivity?

DR. MARTIN: The machine is a sampling machine. You've got to have a certain number of moles for the thing to see it. It might be femtomoles. If you know what you're looking for, the number goes down. But ‑‑

MR. ROWE: You need an amplification system.

DR. MARTIN: You got it.

DR. NEREM: Over here, Bob Nerem.

Over here, Dan. Maybe I missed it, but can you give us any insight as to advances being made in applying mass spec to tissues?

DR. MARTIN: You mean a tissue being like a whole liver or a whole monoculture or somewhere in between?

DR. NEREM: Well, yeah, I mean a tissue being a three‑dimensional structure taken from the body with cells and extracellular matrix and so forth.

DR. MARTIN: Yeah, the technology's there to look at tissue. There are resources on the Web. You can go and see what happens when you grind up a liver. And you get out what you put in. It's simply a sampling technique. And in the discovery and the shotgun method, you will find things in the order of their abundance. And so with the liver, you'll probably see a lot of albumin. You'll see the connective tissue.

You'll see ‑‑ in every experiment when you start with unfractionated cells, you see all the metabolic proteins, the ribosomal proteins. So you sample ‑‑ it's like the reverse food pyramid. You start at the bottom. And in order to see up in the pyramid, you just got to keep sampling and sampling and sampling, which is why these atlases ‑‑ once they hit a certain content, they grow very, very slowly.

And because what we're doing is assigning a peptide sequence to a spectrum, there's a certain level of false positives. So if you just keep running the machine and running the machine, you're going to have a sort of slow, constant upward slope that represents your false positives. And so the quality of what you get out depends on the quality of the sample that you put in.

So the purer the prep ‑‑ whether it's an IP, a fragment, a fraction of the cell, or a monoculture ‑‑ the better. You're going to get better coverage of the peroxisome when you analyze fractionated peroxisomes. If you look at whole liver ‑‑ well, not liver ‑‑ if you look at whole kidney, you're not going to see a heck of a lot of the peroxisomal proteins covered.

DR. PLANT: I'm going to make a computer switch while we take at least one or maybe two more questions. Let's do Chuck first, and then we'll get to you, Rocky.

No?

DR. DURFOR: No. It wasn't me.

Since it's the holiday season and you're offering presents, I thought I'd maybe ask for one.

The whole presentation you gave is really based on one method of introducing samples into the mass spec, and yet there are many others ‑‑ laser desorption, things like that.

That suggests that there may be a possibility, and that's what I'd like to hear you comment on. If you were to use another method of bringing your sample to the mass spec, like laser desorption, you now have the ability to sample specific areas. And so you're not only getting the proteomic information, but you may actually be able to get in and look at the heterogeneity of a cell population, because you're directing your sampling to a particular area on a scaffold. And I wonder if you'd speak to that.

DR. MARTIN: Yeah, there's probably a whole day's worth of lecture on tissue sampling ‑‑ taking a monolayer of cells, coating them with something that really helps them ionize when you shoot them with the laser, and looking at what's there. And there are people who are intently interested in this. And perhaps for the next session, you can bring someone in who actually does that. I only know it tangentially, just because my field is pretty busy as it is.

By and large, it's the same thing. I know I sound like a broken record here. But you take a cell, and you zap it, and you think about how big is my laser spot. So the laser spot is a few microns ‑‑ 10 microns around. Well, that's one cell diameter. You have the issue of how many moles am I getting in and how many proteins am I trying to sample at the same time. And it's one thing if you want to look at the profiles of whole proteins ‑‑ it will provide you different information, and it may be informative. But I probably ought to end it there.

DR. TUAN: Yeah, Rocky Tuan here, NIH. So great talk. I just have a question about sort of the differential aspects of this technology.

Sort of the power of microarray‑based analysis is, of course, you have two samples. You just want to know the changes. You don't really care about things that don't change, right? So you mix the two of them. And red and green, you get orange. And you look at the red and the green rather than the orange.

So in the mass spectrometry field, how good are your subtractive procedures? Whether it's instrumentation or mathematical or what have you ‑‑ I mean is that being developed in some way that will address the sensitivity issue in terms of looking for those things that change?

DR. MARTIN: Absolutely, it is. So the delta that you're talking about, the two‑sample comparison, is based on looking at a light versus a heavy peptide. And what I showed you is that you look at the areas under the curves. And the finite ratio levels that you get ‑‑ really, beyond 4:1 it's immaterial, because you're actually sampling noise.

But the technology has made dramatic improvements in how you derivatize peptides such that one condition is heavy and one condition is light. And one of the ways people do that is actually growing your cells, if you have the luxury of growing them, in media where all of the arginine and lysine is heavy. You basically do metabolic labeling. And when you actually compare your samples, it's right there for you. So every peptide has a heavy partner.

Now, the quantification is coming along. And there are issues, because you now have to integrate not just one spot ‑‑ there might be 15 different peptides. And so there are all kinds of computational solutions to doing this. But you can study a proteome and say what peptides differentiate these two samples and therefore what proteins are changing.

DR. TUAN: Thanks.

DR. PLANT: Our next speaker is Andres Garcia. Dr. Garcia is an associate professor and Woodruff faculty fellow at Georgia Tech. Prior to Georgia Tech, he was at U Penn working with David Boettiger on the interaction of extracellular matrix and membrane protein receptors in cells.

He has numerous honors, including being a fellow of the Woodruff School faculty and of the American Institute of Medical and Biological Engineering.

His current projects involve manipulation and analysis of cell adhesion receptors and their extracellular matrix ligands.

And, Andres, thank you very much for joining us.

DR. GARCIA: It's a pleasure to be here. I want to thank the organizers for the kind invitation.

I decided to structure my talk to give you an overview of why adhesion is important ‑‑ why should we care about adhesion. And then I've broken my talk into two parts. One of them describes methods to analyze adhesion, which is along the lines of what the FDA asked me to talk about, and the other is how the field is currently approaching ways to manipulate those adhesive interactions for a functional endpoint.

I was very heartened yesterday when I heard a lot of talks about integrins because that would be the major receptor that I will talk about today.

The key concept of my talk is I want you to think about and really realize that adhesion is a tightly regulated, dynamic biological process. And the key word here is "process." It's not taking a snapshot of how many cells are on the surface at two hours. The adhesive process is involved throughout the lifetime of the cell and its function on the surface.

This is one of the most intensively studied areas of biology for over the last 50 years. And numerous studies have shown that adhesion is central to the formation, maintenance and repair of numerous tissues.

As such, adhesive considerations are also involved in many pathological conditions such as metastasis, blood clotting and wound healing defects. Because of the central importance of adhesion in these physiological and pathological conditions, adhesive considerations are also critical to many biomedical and biotechnological applications. And I've listed several there.

The key point about adhesion is that these adhesive interactions really provide two things. As an adhesive process, this is a mechanical event. And this adhesion or anchorage of the cells permits cell migration, but also tissue structure and function.

Also, over the last decade, it has become evident that adhesive interactions trigger signaling pathways that regulate cell cycle progression, differentiation, and even cell death decisions.

So we need to think of adhesion as a signal transduction element in the interactions and the responses of cells within their environment.

When we look at interactions between cells and surfaces, again, it's important to stress the mechanism of these adhesive interactions: they involve specific receptor interactions with adhesion proteins or motifs on the particular biomaterial, which could be either synthetic or biological.

These interactions ‑‑ the adhesive process ‑‑ involve one or a combination of three major mechanisms. There are proteins that adsorb from physiological fluids onto the surface, and the cells directly recognize and adhere to motifs on those proteins. And there are many examples of that. This could be, for example, adhesion of cells in culture, which is primarily mediated by vitronectin that adsorbs from the serum onto the surface. In the case of an in vivo implant, the major adhesive protein there is fibrinogen. But, again, this is adsorption from physiological fluids.

Depending on the cell type, many cells will secrete and deposit a rich extracellular matrix on the surface. And maybe initially you think the cells adhere via a particular protein, but over time the cells will remodel that matrix and secrete and lay down their own proteins.

Then finally, over the last decade, there has been a major shift or focus in the biomaterials field to present or engineer bioadhesive sequences on different surfaces such as RGD. And in this case, these peptides can target and be recognized by endogenous cellular receptors.

Again, I need to stress that these three mechanisms that mediate the adhesion of these cells to the surfaces are very dynamic in nature, in that the dominant mechanism may change within as little as a few hours of your cells' interaction with these materials.

There are five major families of adhesion receptors in cell systems, and of these, integrins represent the primary receptors for extracellular matrix proteins. The integrin receptor is a heterodimer: it has two subunits, an alpha and a beta subunit. And it will only be expressed as a heterodimer on the surface of the cells.

In this case, it's important to realize that cells will express multiple integrins even for the same ligand. So in the case of primary human osteoblasts, which are the cells that regulate mineralization and bone formation, they express eight different integrins that bind fibronectin. So they have eight receptors that bind the same extracellular protein.

This is not a redundant system. The reason for this is that there is new evidence suggesting that different integrins modulate or control different facets of these adhesive interactions.

Okay. So shown here is just a topographical map of different integrins as they're broken down. And you can see that there are ‑‑ for a particular beta subunit, there are multiple alpha subunits that will dimerize with this receptor. And, again, multiple cells will express multiple integrins for particular ligands.

Here is just a list of different integrin receptors and their particular ligands and binding sites. And you can see the prevalence of RGD among those binding sites. From this table ‑‑ and I don't want you memorizing it. There's no quiz. I told the organizers that I was not going to give you a quiz today.

You can see that there are receptors that are highly specific for one particular ligand. And we have other receptors that are very promiscuous. The alpha v beta 3 will essentially bind RGD in any conformation in a variety of natural and synthetic matrices.

Finally, it's important to realize ‑‑ and I give a brief table here ‑‑ that there are particular integrin interactions that have been associated with controlling proliferation and differentiation in different cell types, but they do not seem to be unique.

This is, for example, alpha 5 beta 1 binding to fibronectin. It's important in the differentiation and proliferation of the cell types listed there. But you can also see that there's overlap with interactions of another integrin, alpha 2 beta 1, and (inaudible).

Again, as I mentioned at the outset, the adhesive process is a complex biochemical event. It involves binding of these receptors, and as the integrins bind, they become clustered.

When the integrins cluster, they form the nucleation site for the assembly of supramolecular structures that I will term focal adhesions. And these focal adhesions really provide the sites for the assembly of strong structural components. These are the sites of large adhesive forces. But they also recruit a variety of signaling complexes. And they're believed to be the centers for signal transduction from the outside of the cell to the inside.

You can draw all sorts of funky diagrams that are very complex here. And that's just to illustrate the complexity of the system. To date, there are hundreds of proteins that have been demonstrated to localize to focal adhesion structures.

I want to stress that these focal adhesions have been demonstrated to be sites where growth factor signaling events and adhesive events are integrated. So there's integration between separate pathways in directing signaling responses.

Okay. So how do we measure adhesion? There's a variety of implicit assays: people plate cells, and if the cells spread faster, they assume that there's more adhesion. That's not very rewarding for the engineers.

There's a variety of qualitative assays. You plate your cells. You squirt them with a pipette. Stick your tongue out at the cells. Hopefully, some of the cells will fall off. You count, and you say I have a measure of cell adhesion, right?

But you can all appreciate ‑‑ and obviously, I'm joking; I'm exaggerating to some extent ‑‑ that these methods apply uncontrolled and, to some extent, non‑reproducible forces to the adhesive process, particularly across different labs.

So different efforts have focused on the development of quantitative assays. And you can typically categorize them by the format of the applied force. They fall into three general categories, which are the ones that I've listed on that slide.

This is to scare you a little bit. And, again, I'm not going to give you a quiz. Very complex slide. What I want to show here is that it doesn't matter which assay you have here.

Each of those assays will have advantages and disadvantages. There is no clear‑cut assay to measure adhesion strength. And you need to make a compromise, and you need to make a decision what sort of information you want to obtain from this process in order to make the comparison. But there's clearly not one assay that will work in all situations.

So I've shown two examples here of assays that we use in my lab on a regular basis. One is a centrifugation‑based assay. And, again, the important thing here is not the details. It's just that you can see that you get very good data, that you can discriminate across different experimental conditions.

So a centrifugation assay is beautiful. You can do it in a sort of high throughput fashion. We do it in a 96‑well plate ‑‑ a lot of conditions ‑‑ and all labs pretty much have a centrifuge. So you don't need very specialized equipment to do this. And this is a really easy assay. My 12‑year‑old son has actually spun cells off a dish and gotten good measurements. You don't need heavy‑duty training.

The down side is it's a very relative assay, and the assay's very limited in the range of forces that you can apply. So you quickly get to a point beyond which you can no longer detach the cells from the surface of the dish. For a fibroblast in serum, that's about an hour. So you're limited to very early in the adhesive process.
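Part of the centrifugation assay's appeal is that the applied force is easy to estimate: it's just the net buoyant force on the cell at a given relative centrifugal force. A minimal sketch with assumed, typical values for cell size and density ‑‑ none of these numbers come from the talk:

```python
import math

def detachment_force_pN(rcf_g, cell_radius_um=7.5,
                        rho_cell=1070.0, rho_medium=1000.0):
    """Normal detachment force (pN) on a cell in an inverted, spinning
    plate: F = V_cell * (rho_cell - rho_medium) * a.
    Radius and densities are typical assumed values."""
    g = 9.81                                   # m/s^2
    r = cell_radius_um * 1e-6                  # m
    volume = (4.0 / 3.0) * math.pi * r ** 3    # m^3
    force_N = volume * (rho_cell - rho_medium) * rcf_g * g
    return force_N * 1e12

# Even a few hundred g yields well under a nanonewton per cell, which
# is one reason the assay only probes early, weak adhesion.
print(round(detachment_force_pN(500.0)), "pN at 500 g")  # ~607 pN
```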

I am a big fan of the hydrodynamic assays. And the one that we have developed in my lab is the spinning disk assay ‑‑ I can't go into all the details as to why we do that. But it's an assay that relies on applying a hydrodynamic force to cells adhering to the surface of a material.

After the assay's done, we can count the number of cells that remain attached as a function of radial position. And we have validated our system, so we know how radial position relates to the applied force on the cell.

With this assay, we have obtained what I think are accurate and precise measurements over a ten‑year span. The value that we get is absolute, and it hasn't changed between measurements in Philadelphia and in Atlanta. So the water didn't matter. The people doing the assay didn't matter.

So it's a very precise assay. And what that allows you to do is, for example, here we've been able to do a very systematic and rigorous analysis of the adhesive process as a function of different focal adhesion components, different conditions on the substrate.

It's a very, very sensitive assay. The down side is that it's a very skill intensive assay. My 12‑year‑old son can't do it. But if you want those precise measurements, this is a great assay to use.
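What makes the spinning disk attractive is that the applied force is calculable from first principles and grows linearly with radial position, so a single disk samples a whole range of detachment forces. A hedged sketch: the expression below, tau = 0.8 · r · sqrt(rho · mu · omega^3), is the form commonly quoted for this assay, stated here as an assumption rather than taken from the talk:

```python
import math

def wall_shear_Pa(r_m, omega_rad_s, rho=1000.0, mu=1.0e-3):
    """Wall shear stress under a disk spinning in fluid:
    tau = 0.8 * r * sqrt(rho * mu * omega^3).
    rho: fluid density (kg/m^3); mu: viscosity (Pa*s); r: radius (m)."""
    return 0.8 * r_m * math.sqrt(rho * mu * omega_rad_s ** 3)

# Shear grows linearly from zero at the disk center outward:
omega = 2000.0 * 2.0 * math.pi / 60.0      # 2000 rpm in rad/s
for r_mm in (1.0, 3.0, 5.0):
    tau = wall_shear_Pa(r_mm / 1000.0, omega)
    print(f"{r_mm} mm: {tau:.1f} Pa")      # ~2.4, ~7.3, ~12.1 Pa
```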

So that's looking at adhesion as a mechanical event. How many cells are there? You apply some sort of a force, and you count the number of cells that are there.

But as I mentioned again at the beginning, the adhesion event is a process. And there's that mechanical aspect. But the downstream signals that are triggered by the adhesion process are just as important.

There's a slew of different biomarkers and outcomes that you can look at. Again, there's not one single assay. And in all the analyses that we do, we end up picking one or two output measures from the different categories shown on this slide, depending on the cell system, the application, and what sort of information we need.

But these analyses have to be comprehensive, and they have to be integrated. There's not one single parameter that will give you the whole picture, because adhesion is a process.

Finally, this is something that ‑‑ it's extremely important. And unfortunately, the majority of the work in the field doesn't do this. And that's to understand what is the underlying adhesive mechanism.

In order to do that, you need to analyze, let's say, the integrin binding components that you have in there. And my lab has developed assays to do that. But more importantly, there have to be function‑perturbing experiments. This is a complex process ‑‑ I mentioned it's a dynamic process.

If you're really interested in understanding how the cells are interacting with the surface, you need to go ahead and do blocking experiments, either with antibodies or RNAi, to really establish the mechanism controlling these responses.

So I wanted to show you an example of some of the work that we did to show that this issue of integrin binding is important. These studies were geared towards how biomaterial surface chemistry influences the activity of fibronectin.

The motivation for this work is a large body of literature, including work from our lab, showing that fibronectin, which is a major extracellular protein, has a very flexible structure, and the underlying biomaterial chemistry alters the structure of that adsorbed fibronectin.

We also showed that those changes in structure influence the activity, or the way cells recognize fibronectin. So in these assays, we use self‑assembled monolayers that present well defined chemistries. And we have these four basic chemistries here. And we coated with the same amount of fibronectin.

We demonstrated that on these different chemistries, the structure, particularly around the RGD binding site of fibronectin, was altered. So we plated these immature osteoblast cells, in this case, and looked at binding of the two major integrins that these cells express for that fibronectin.

And what you can see is that you have a large difference in integrin recognition or integrin usage across these chemistries. We have surfaces like the methyl‑terminated surface, on which the fibronectin is in such an orientation or activity state that it supports poor binding of both of these integrins.

I want to point out that these integrins both compete with each other for fibronectin. We have surfaces that are very promiscuous, and the fibronectin on the surface supports good binding of both integrins.

Interestingly, we have surfaces that preferentially support binding of one integrin over another. So here we had an experimental system in which we had the same matrix protein but different activities and different integrin usages in the same cell type. And we used this system to examine how changing integrin binding specificity affected cellular responses.

As I mentioned, we did our battery of assays, including initial adhesion binding, initial signaling as shown here by FAK, and some more long‑term assays involving gene expression. And what all these assays showed is that we had differences in the activity of the surfaces, where the OH and the amine surfaces appeared to have better activity than the other surfaces.

But as in all the assays that we do, we need to have an acid test, a functional assay. And for the case of osteoblasts, which is a major cell that we work with, it's a mineralization assay. These cells control and regulate mineralization on the surface. And what you can see is that when we plate these cells for 14 days in the presence of serum on surfaces that were coated with the same density of fibronectin, we have surfaces that preferentially support mineralization in this system. And the mineralized deposits are these black deposits here.

We see that the surfaces that support high levels of alpha 5 beta 1 binding to fibronectin support high levels of mineralization. That was not a surprise, because our lab and others had demonstrated that that's a key receptor in the differentiation process of these cells.

However, we were very surprised to find this surface here, which supported high levels of alpha 5 beta 1 binding, but there was no mineralization on the surface. And we hypothesized that the lack of differentiation on these surfaces resulted from an inhibitory signal, or a block, from the alpha v beta 3 integrin.

So again, because of the differences in usage, we hypothesized that differences in integrin binding specificity regulated differentiation, and we tested that hypothesis using blocking antibodies in culture.

And shown here are the results for a surface that supports high levels of mineralization. Under control conditions, we get good levels of mineralization. If we block the human fibronectin, which is what we coat with, with an antibody, or if we block alpha 5 beta 1, we completely block differentiation on those surfaces. If we block alpha v beta 3, there's no effect on those surfaces.

The surface that we were interested in analyzing is this carboxyl‑terminated surface. And, again, under control conditions there was no mineralization on those surfaces. If you block fibronectin or alpha 5 beta 1, there was no effect. But surprisingly, when you block alpha v beta 3 ‑‑ again, eliminating this one here ‑‑ we rescued that block, the inhibition of mineralization.

So the way we interpreted these results is that the ratio of relative binding of the two integrins ‑‑ in this case alpha 5 beta 1 versus alpha v beta 3 for fibronectin ‑‑ directed mineralization. And we have subsequently demonstrated in other cellular systems that this principle of integrin binding specificity really regulates cellular responses on different biomaterial surfaces.

What it suggested to us is that if we can engineer biomaterials to control not just adhesion or integrin binding, but specifically which receptors the cells are using to bind, we may be able to engineer cellular responses.

So in terms of manipulating adhesive interactions, a very promising and widely used strategy in the field ‑‑ and this was really pioneered by Jeff Hubbell over 15 years ago ‑‑ is to direct adhesive interactions, presumably via integrin binding by presenting bioadhesive motifs.

These take the form of either small peptides or even large protein fragments. And the idea is to functionalize non‑adhesive substrates with short adhesive motifs. And, again, most of you are familiar with RGD, and these come in different flavors. You can have linear RGD. You can have cyclic RGD and other domains.

These studies ‑‑ and I tell you, I'm on the editorial boards of Biomaterials and JBMR, and if I get another paper where they put RGD on a surface and tell me that the cells stick, I'm going to throw up. I mean there must be 10,000 papers that show this response. So these surfaces do support integrin binding. They promote adhesion and migration and support proliferation and differentiation for various cell types in vitro.

There are many advantages. These provide a biospecific mechanism to target adhesion. You avoid other interactions by focusing the active domains into short peptides, and they really lend themselves to the synthesis of new synthetic and hybrid materials.

There are some major limitations to these approaches, however. First of all, there's a significant loss of activity between RGD and your native matrix protein. For example, based on our adhesion strengthening measurements, we estimate that RGD is about 50‑fold less active than native fibronectin on a per molar basis.

In addition, there's emerging evidence from various labs ‑‑ and I'll show you some data ‑‑ that while in vitro, RGD appears to work very well, the in vivo results are fairly modest, and in many cases very disappointing. So it works very well in vitro. In vivo, it doesn't appear to work as well.

Finally, if you're going to target these receptors via these adhesive motifs, you need to know what the motifs are. And RGD is not the only motif that will bind integrins.

So just to show you some examples from the field ‑‑ and I just picked them because I had these papers scanned into my computer. Some of the early work from Jeff Hubbell: if you take control PET and you plate endothelial cells, the cells really don't attach very well. If you add RGD, you get better cell spreading. Also, in three dimensions, you can functionalize fibrin to support neurite outgrowth. And this study shows that the actual density of that immobilized or exogenous RGD regulates the extent of neurite outgrowth on these matrices.

Work from Dave Mooney's lab shows that if you vary the density of RGD within an alginate, you can regulate the extent of cell number or proliferation in those systems. And in some instances, you can modulate differentiation.

And, for example, Kevin Healy has also shown that when you present an RGD domain on a titanium surface, you can promote mineralization in vitro to levels similar to what you see on tissue culture plastic, which, as far as I know, is not FDA approved to implant in the body. But, again, it works great in vitro.

So as I mentioned, the in vivo results with RGD have been mixed. And it really depends on what application you're doing. For cell targeting ‑‑ either for tumor applications, to deliver cargoes of drugs, or for vascularization applications ‑‑ RGD works relatively well. And I think the reason is that in the vasculature, on endothelial cells, alpha v beta 3 is a major receptor regulating the responses, and that receptor is an RGD receptor. So I'm not too surprised by that.

There is a report that RGD enhances wound healing, but there have really been no subsequent follow‑up studies on that.

In the case of osseointegration, which is probably the system that has been examined the most after cell targeting, there is data from two papers, plus one more from our lab, showing that RGD doesn't really enhance osseointegration. So, again, it's not clear that RGD in vivo will really result in enhancements in activity. As I mentioned, one of the reasons for that is that RGD displays a significant loss of activity when compared to fibronectin.

This is just data from Richard Oreffo's lab looking at differentiation of mesenchymal stem cells. The pink stain is differentiation. And you can see that when you immobilize RGD on PLA, you get an enhancement in the amount of pink staining. But that's not nearly as good as what you get with fibronectin.

So the limitations of these RGD surfaces come from three sources. First of all ‑‑ and this one should probably be the most obvious ‑‑ this is the structure of fibronectin. You're only cutting out a very small domain of this molecule. And you're omitting, or avoiding access to, domains that are important to the binding of the integrin receptors.

So, for example, we demonstrated that for binding of alpha 5 beta 1, which is my favorite receptor, if you haven't figured that out by now, you need presentation of the RGD site and, in addition, the synergy site that was identified by Ken Yamada.

So if you look at adhesion strength, wild type fibronectin that presents both domains gives you good levels of adhesion strength. But if you analyze fibronectin mutants that have either the RGD site deleted or the synergy site deleted, the receptor doesn't bind.

So we need presentation of both the RGD and synergy sites in order to target this specific receptor.

As I mentioned earlier, integrin binding specificity is important for differentiation. We also showed in skeletal myoblasts that when the cells ligate fibronectin via alpha 5 beta 1, you get good levels of differentiation ‑‑ over 60 percent of the cells differentiating, as shown here by this stain ‑‑ whereas via alpha v beta 3, there's no differentiation. So the particular integrin that the cells use to bind is important.

Also, finally, there are integrins, notably the collagen integrins, that don't bind RGD. So if you're targeting an integrin that is not an RGD integrin, surfaces with RGD are not going to be useful.

So how do we improve bioactivity beyond RGD? There are four general strategies that have been pursued, and I've given examples of the references there. You can try to constrain the peptide ‑‑ do cyclic RGDs. That improves the specificity by about an order of magnitude. But if you look at effects on cell function, even in vitro, it's pretty marginal.

Other groups have tried immobilizing mixtures of short RGD peptides and non‑integrin domains to generate mixed surfaces. And, again, the effects are fairly disappointing. To target or enhance specificity towards alpha 5 beta 1, different groups have tried to immobilize the synergy domain and the RGD domain as short peptides. This really has limited specificity, because in order for the receptor to bind, you need to have these two domains in the correct molecular and structural orientation. And by immobilizing the two peptides randomly, the chances that you hit the right orientation are next to none.

The strategy that my lab has pursued is to develop biomimetic ligands that mimic or recapitulate the secondary and tertiary structure of the native ligand, and we do that in two ways. One of them is using recombinant fragments that essentially cut out the active domain of the fibronectin, which is this fibronectin fragment shown here. And we've also developed synthetic triple helical peptides that promote assembly of the peptide into a triple helix. Both of those systems have demonstrated both increased specificity for integrin receptors and enhanced cellular activities. So I wanted to show you an example of those quickly.

For this system, we wanted to modify titanium. A lot of the biomaterials effort in our lab is to work with existing clinically approved materials and develop coatings for those materials. Titanium is a major metal used in dental and orthopedic applications.

In order to control the density of our ligands on the surface of the titanium, we developed a polymer brush technology that essentially has these polymer brushes growing on the surface of the titanium. The polymer brush provides a non‑fouling background, so proteins cannot adsorb to that brush, and cells cannot adhere. Then we can functionalize that brush with functional groups to present our ligands of interest.

So in this study, we were interested again in comparing integrin binding specificity. We compared two ligands: linear RGD, which is a standard in the field and, again, binds via αvβ3, and our fibronectin fragment, which is a recombinant fragment that presents both binding sites to the receptor. And we have shown that it is specific for α5β1.

So, again, if you take these surfaces -- if you take plain titanium and you plate bone marrow stromal cells, which are labeled green here, the cells attach and spread very well, even if you keep them out to 5, 6 days. But on the unmodified brush, we get a very resistant layer. The cells cannot attach and adhere.

But if you take that brush and then functionalize with your ligand, we can recover and rescue the adhesion and we can target the particular receptor that we want.

Again, I don't want you guys focusing on the data too much. But if you look here when we do blocking studies, we can show that on the RGD titanium, the adhesion is primarily mediated by αvβ3. That's this hashed bar here. When you add an antibody that blocks that, we eliminate all adhesion to the surfaces. When we present our fibronectin fragment, you only eliminate the adhesion when you block α5β1. So this study demonstrates, at least in vitro, that you have this specificity between the two receptors.

If we look at our other markers -- in this case, osteocalcin gene expression or mineralization by incorporation of calcium -- you can see an enhancement with the fibronectin fragment over RGD. And I have to say this is on an equimolar basis: we have the same density of the active domains.

The reason we moved to this system with the polymer brush is that we can then implant it in an in vivo environment. We were very interested to develop in vivo surrogate models in which we can test our biomaterials and really translate our in vitro results to in vivo outcomes.

In this model, we essentially implant these titanium implants shown here into the rat tibial cortex for four weeks. After four weeks, we explant them and take two outcome measures that have clinical relevance. One is the bone‑implant contact: you essentially measure the percentage of the area of the implant that's in contact with the bone. The other outcome is a pull‑out force to measure osseointegration.
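To make those two outcome measures concrete, here is a minimal sketch of how the bone‑implant contact percentage might be computed from a traced histology section. This is an illustration only, not Dr. Garcia's actual analysis pipeline; the function, the sampling scheme and the numbers are hypothetical.

    import numpy as np

    def bone_implant_contact(contact_mask: np.ndarray) -> float:
        """Percent of the implant perimeter in direct contact with bone.

        contact_mask: boolean array sampled along the implant surface in a
        histology section (True where bone touches the implant). Real
        histomorphometry traces the interface in calibrated images; here
        the mask is taken as given.
        """
        return 100.0 * contact_mask.sum() / contact_mask.size

    # Hypothetical example: 500 sample points along one implant section,
    # with roughly 45 percent of the surface touching bone.
    rng = np.random.default_rng(0)
    mask = rng.random(500) < 0.45
    print(f"BIC = {bone_implant_contact(mask):.1f}%")

The pull‑out force, by contrast, is a directly measured quantity and needs no derived computation.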

What I've shown here is regular titanium. So this is our titanium, commercially pure. This is the one used clinically. There's nothing on the surface. And then our polymer brushes and engineered ligands on the surface of the titanium.

If you look at either of these two outcome measures -- again, the contact area or the pull‑out force -- you can see that the RGD really doesn't enhance integration over the native titanium. And I think titanium is a great material. It's osteoconductive when you implant it in there.

So when we compare head‑to‑head the RGD to the fibronectin fragment, we see a significant enhancement in osseointegration by targeting specific integrins; and moreover, these brush surfaces that are engineered with the integrin‑specific ligand enhance osseointegration when compared to the clinical standard.

Again, here what we're doing is conveying integrin specificity. Importantly, when we vary the density of that ligand in vivo, we also see corresponding enhancements in osseointegration. So it's not only the specificity; the density of that integrin ligand also regulates in vivo outcomes.

So just to close up, I have some considerations or things that I think are important as we move on and consider assays or comparison platforms. When comparing adhesion, again, I think multiple outcomes are needed to evaluate these processes.

There's a question of whether you need to develop a standard adhesion assay to do this. I think that's going to depend on the needs of what you want to accomplish. It's important ‑‑ again, as I mentioned, it's a process. So it's important to have an understanding of what the evolution of the adhesion process is.

But more importantly, the mechanism: I've seen a lot of work where people argue that more adhesion equals enhanced function. I have plenty of data from my lab that support that, and I have plenty of data that show the opposite. And thinking that more cells sticking means that you're going to get more bone or better cardiovascular integration eight weeks in vivo is, I think, a very naive way of thinking about this.

Most of the analyses to measure adhesion rely on 2D systems. Moving those or extending them to 3D poses significant technical challenges. I'll be happy to address some of that afterwards.

I also want to point out, as I mentioned, that in addition to adhesion in terms of how many cells remain there when you apply a force, we need to have a good understanding of all the signaling and downstream effects that follow from those adhesive processes. And there are some potential categories or lists that we should consider.

In terms of general considerations, a lot of work is done with cell types that I would argue are not appropriate for the application that you're trying to understand. And I think a lot of effort needs to go in defining appropriate cell models to examine the responses that you want to use.

In addition, we need to extend this to co‑cultures. There's a lot of evidence that cell‑cell interactions actually have a heavy influence in regulating cell‑ECM interactions and, likewise, cell‑ECM interactions regulate cell‑cell interactions.

One of the things that's very frustrating about the field is the lack of appropriate controls and reference conditions. And particularly, the reference condition I think is very, very important, given the wide variety of adhesion assays that are available.

I think it is important to establish what the adhesive mechanism from the ligand side is, whether it's an adsorbed protein or an engineered ligand. Is it what the cells are secreting? Is it a remodeling of the matrix? Again, these are hard things to assess. But I think in order to move forward with informed decisions for in vivo studies, we need to have a good understanding of what's happening in terms of the cell. And, again, I think the density, activity, conformation and specificity of those ligands are critical for the subsequent cellular responses.

I mentioned the cell‑cell effects. There are also issues related to doing these experiments in serum or plasma supplements. And -- something that was mentioned yesterday by many of the speakers -- I know that our charge was to focus on in vitro systems, and I think we need to work harder on in vitro systems.

But I think those analyses have to go hand‑in‑hand with appropriate in vivo systems. It is not a linear progression. In our lab, we do in vivo studies very early on, and we develop different in vivo models depending on the stringency and the difficulty. But we can learn a lot from those preliminary and early in vivo studies to then reengineer and improve our in vitro assays.

With that, I'd like to acknowledge my collaborators. I have a fantastic group of former ‑‑ which are the guys in gray ‑‑ and current students and postdocs and all my collaborators. Thank you.

DR. PLANT: Thank you, Andres. Questions?

DR. YILDIRIM: Thank you, Dr. Garcia. That was a very informative talk.

DR. PLANT: Could you introduce yourself, please?

DR. YILDIRIM: I'm Eda Yildirim from Drexel University. So I have a couple of questions about your method.

Did you ever try this with solid tissue in 3D? Did you characterize it in 3D?

DR. GARCIA: For 3D, we have tried some studies using the centrifugation method, and we've done some flow perfusion, where you ramp up the flow rate through the scaffold.

I have to say there's significant technical challenges because the architecture of the scaffold, as most of you would expect, will have a significant impact on the force that you're applying to the cell.

In one particular scaffold, let's say, if you set your flow rate at 1 milliliter per minute, that can give rise to very different forces for scaffolds with different pore sizes or different pore architectures.

The methods are applicable to three dimensions. But then the additional complexity of the scaffolds has to be taken into account if you're going to compare across different scaffolds.
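As a rough illustration of why the same flow rate produces very different forces in different architectures: for an idealized cylindrical pore under Poiseuille flow, wall shear stress scales with the inverse cube of the pore radius. A minimal sketch, assuming idealized straight parallel pores; the pore counts and radii are made up for illustration:

    import math

    def wall_shear_stress(Q_total_ml_min, n_pores, pore_radius_um, mu=1e-3):
        """Poiseuille wall shear stress (Pa) in one of n parallel cylindrical
        pores: tau_w = 4 * mu * Q_pore / (pi * r^3), with the total flow
        split evenly. mu defaults to roughly the viscosity of water, in Pa*s.
        """
        Q_pore = (Q_total_ml_min * 1e-6 / 60.0) / n_pores   # m^3/s per pore
        r = pore_radius_um * 1e-6                           # m
        return 4.0 * mu * Q_pore / (math.pi * r ** 3)

    # The same 1 mL/min through two hypothetical scaffolds:
    print(wall_shear_stress(1.0, n_pores=1000, pore_radius_um=100))  # larger pores
    print(wall_shear_stress(1.0, n_pores=1000, pore_radius_um=50))   # ~8x the stress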

DR. YILDIRIM: I see. Another question is, so basically you introduce some ligands in order to improve cell adhesion. But have you ever looked at the material and the cell interface, and have you ever characterized the interface?

DR. GARCIA: Extensively, for the systems that we're interested in. We've done a lot of work -- I presented the ones where we had the polymer brushes. We've also done a lot of analyses on surfaces that are exposed to either serum or plasma and, again, used similar tools to those I presented to dissect apart which receptors are relevant to that process.

I can tell you that there are a lot of dynamic responses that go along the way, and those are tricky things to do. But I think they're worth doing, and there's a lot of research from my lab, and a lot of other groups have looked at those systems.

DR. YILDIRIM: Okay. One more question. Sorry.

DR. PLANT: We're going to move on. If you don't mind, if you could talk with Dr. Garcia afterwards. So we can get two more really, really quick questions in.

One more quick question, that would be great. I'm trying to get back on time here, sorry.

MR. ROWE: So the osteoblast lineage is, of course, a very heterogeneous lineage -- certainly by expression patterns, and I would guess by cell surface patterns -- such that a pre‑osteoblast or early pericyte -- the very early cell that has to expand in order to make bone -- is very, very different from a cell that is highly differentiated and now making a mineralized matrix.

So I guess the question would be: do you want to target your surface to be attractive to the pre‑osteoblasts that are going to expand and then make a matrix that will allow them to mineralize, versus targeting a cell state that you want days later but that is nowhere close to what's there the day you put it in? How do --

DR. GARCIA: I think that's a good question. A couple of things are important for me to point out. First, I think that's going to depend on the particular cell model that you're going to look at. I'm going to come back to the osteoblast because we have looked at that.

In our hands, and for the cells that we're interested in, which are mostly the mesenchymal‑derived cells, α5β1 has been demonstrated to be a key pro‑differentiation receptor. There are many instances in which α5β1 provides inhibitory responses -- I'm thinking of keratinocytes, for example. So what works in our hands for a particular cell model may not work for a different cell model.

To answer your question about the ‑‑ for the osteoblasts, we did in vitro studies with blocking antibodies that we added at different time points in culture to see the relative contributions of different receptors throughout the process of differentiation.

What we found -- and this was surprising to us. So, again, the differentiation program in our cells is 14 days in culture with serum and some supplements. What we did is we added blocking antibodies at different time periods in culture and then maintained them through the duration of the culture. And what we found was that when we added the antibodies early in the culture time, we completely blocked differentiation. And the longer we waited to add the blocking antibody, the less inhibition we had. In fact, if we added antibodies after Day 10 out of the 14, there was no effect on the differentiation of the system. And that, to us, shed a lot of light, suggesting that the signaling response comes very early in the process. And, again, it's a cascade and it's a program of differentiation. And those early responses translate over multiple days downstream.

I have to tell you that in my experience with the myoblasts, that was not the case. Over the differentiation program of the osteoblasts, their integrin expression profile is fairly constant. In the myoblast case, the integrin expression profile changes radically as they differentiate. And it appears that some receptors are critical early on, and the cells don't use them anymore later in the process.

So, again, some of these things are really dependent on the particular cell model that you're looking at. We're now doing studies with ligands where we can precisely control, with a synthetic system, when the ligand is presented and when we take it away. And hopefully, that will allow us to answer some of those questions.

DR. PLANT: Thank you, Andres.

Okay. Our last speaker for this morning is Dr. Michael Sacks. Dr. Sacks got his PhD in biomedical engineering from UT Southwestern Medical Center in Dallas. He's served on the faculty of the University of Miami in biomedical engineering and the University of Pennsylvania, also in bioengineering.

He is currently a professor in the Department of Bioengineering at the University of Pittsburgh. In fact, he's the William Kepler Whiteford Professor in the School of Engineering. And he is serving as director of the Engineered Tissue Mechanics Laboratory at the McGowan Institute for Regenerative Medicine at the University of Pittsburgh. He is also a fellow of the American Institute for Medical and Biological Engineering and an established investigator of the American Heart Association.

Dr. Sacks, thank you for being with us. Let me help you with this.

DR. SACKS: Okay. This is a nice view of Pittsburgh, what it looks like in one of three days a year when it's not cloudy or raining or snowing.

I want to first thank the organizers of this meeting for organizing what has been for me a very refreshing, in‑depth and very interesting educational workshop. I wasn't quite sure how it was going to pan out when they offered me a very kind invitation, and it's been really quite interesting.

I also understand that I'm the only thing standing between you and lunch, which is not a very good position to be in -- the only worse one probably being between you and dinner. So I will try to be to the point. And I think what I can try to do is give you a little bit of a link between the kinds of macro functional things that people need to estimate and the kinds of more subtle, smaller‑scale cellular events that we've seen in many of the previous talks.

Now, I want to first clarify who coined the term tissue engineering. This was first coined by a preeminent biomechanicist, Y.C. Fung, in '87, where he meant it really in the sense of determining the biomechanical responses of cells and tissues in order to learn how to replace them. And I think the clinical need for this has been made very evident in this session.

Now, I want to give you guys a broader scope of what we mean by biomechanics. There's a tendency to think of biomechanics as Young's modulus, Poisson's ratio, and Instrons, and that's kind of where it ends. And I just want to get you guys to understand that biomechanics is what really many of us consider the in‑between or the middle name between structure and function.

I also want to underscore -- and I think the previous talk by Andres Garcia also showed this -- how at a molecular level biomechanics plays a very important role. But it really operates at all different scales. And I think it's important to understand biomechanics as a trans‑scale science that means many things to many people. And I think it's a very nice way to approach it.

Now, one of the things that I think is most appropriate in terms of understanding biomechanics is this concept, coined by a workshop, called functional tissue engineering. And this came out in an article published in the Journal of Biomechanical Engineering by Dave Butler et al. in 2000.

And I think one of the things that I've seen filter through this meeting is people measuring lots of things. And I also get a lot of requests for measuring lots of things. And the question is, well, what are you trying to measure? And I also ask, well, what are you trying to learn? And so I think the concept of functional tissue engineering is basically trying to replace or restore a function, which is pretty much why we're all here. And some of the questions are: what are the thresholds of forces, stresses and strains that a normal tissue must withstand in normal operation? This may appear to be a "been there, done that" question. But it turns out that there are many, many applications at many scales where we just don't have this kind of knowledge.

What are the mechanical properties during both normal and failure conditions? And what properties should be incorporated into the designs? And this last point I think is the most critical and the most difficult to answer.

So one of the things that we also have to learn is, when developing implants in culture, how do mechanical factors regulate cell behavior as compared to those experienced in vivo? Again, those of us, particularly myself, who do a lot of tissue engineering tend to use cells as little black boxes to produce things, and they're much more complex than that.

I think the second point is really very important, particularly for someone like myself, who has spent much of his career trying to learn the structure‑strength relations in various tissue systems. Do we have to exactly reproduce every feature of a native tissue to get to acceptable levels of physiological function restoration? And the answer is no, but the question is, well, then how much and in what way? In evaluating the repairs, how good is good enough? And I think the clinicians in the audience deal with this on an almost daily basis.

In terms of what we need, we need to establish stress‑strain histories over the physiological range. And I need to emphasize that in today's laboratories, you can measure almost anything. And so the question becomes, well, what do you measure? And the question you ask always has to be guided by physiological function, not necessarily by a particular mechanical theory.

Mechanical properties of native tissues must be established in the sub‑failure and failure states. A subset of these have to be selected and prioritized. That is, you're not going to be able to do everything. You got to figure out what's the most important for your particular application.

I think, again, just to reiterate, the standards must be set in evaluating repairs and replacement after surgery to determine how good is good enough. And these appear to be very simple questions, but in practice they're really quite challenging to answer.

Now, throughout this meeting, we've seen a lot of basic considerations. And what I'm going to focus on primarily in today's talk is the in vitro phase, although we have a number of different projects going where we look in vivo.

In an in vitro situation, we look primarily at factors such as enhanced protein synthesis, tissue formation and strength, and the strategic use of mechanical and/or biochemical stimulation to enhance tissue formation.

And then, ultimately, when this is implanted in vivo, we look for invasive measures during explant and also noninvasive measures, which are primarily image‑based. So this is the kind of framework we like to start with.

In terms of what we look at in tissue engineering, most of it uses scaffolds, which we've heard quite a bit about. They're generally grouped into two major categories. The biologically derived include materials such as intestinal submucosa and urinary bladder matrix; decellularized tissues such as decellularized aortic valves; gels made of a variety of biological proteins, the most common being collagen, fibrin and glycosaminoglycans; and also electrospun biopolymers, which have been very commonly used by several groups.

Synthetics offer another approach. Wovens and fabrics, gels and foams are the primary forms these are made in, along with non‑wovens, such as those made from PGA and PLLA, and also electrospun biodegradable polymers.

So you're dealing really from a biomechanical point of view with a tremendous spectrum of materials and structures that people have tried and developed over the years. And it's been kind of interesting for me. It's kind of ‑‑ I'm also very active in the Society for Biomaterials. And living between the two worlds has been quite interesting.

Mechanical behavior: I think when we look at everything that can be done and that can be measured, it's quite daunting. And those of us like myself who have a mechanics background -- my first two degrees are in applied mechanics -- are often theory driven. But I think as you learn to do biomedical science, you always have to be thinking of everything being driven by the physiological functional requirements.

So there are a variety of things that you can look at. For example, stress‑strain responses, which are non‑linear with rapid transitions in stiffness; this makes computational modeling challenging. There are time‑dependence issues such as visco‑ and poroelasticity; and anisotropy, which can actually be very, very important, particularly in the valves, which I'll be presenting as an example today.

One of the real challenges is dimensionality -- in other words, what do you test and how do you do it? There are classic uniaxial techniques, which have been used for many years, largely in the musculoskeletal literature. Those are fine for tissues like tendons and ligaments, whose physiological function is primarily unidimensional in nature.

For the more planar tissues such as membranes or valves, you have planar approaches. And then there are solid materials such as the myocardial wall or the urinary bladder wall, which are in full 3D. And to date, there is not one generalized, accepted method for doing this. And so it's not so trivial.

Major modes: we think not only of taking something and pulling and stretching it, but what is it subjected to? Tension, compression, flexure -- the classic modes of deformation. Some tissues are predominantly one. Articular cartilage is primarily compressive. Others, such as meniscus and ligaments, are primarily in tension.

There's also scale, which I think is very critical. There's very, very localized properties, such as people use AFM or nano‑indentation, which can give you a great deal of information but on a very, very localized scale.

Larger scales are generally more relevant for more macro physiological function. And I think in biomechanics, one always has to be cognizant of linking measures at various scales to make sense of both the cellular and the larger‑scale physiological behaviors, and this is an interest of mine in particular.

So as an example -- as they always tell writers, write what you know -- I'm going to talk about what I know. I've been spending the last 15 years of my career looking at these little leaflets, which don't do much other than act as check valves, but they perform really quite amazing functions. And I think they underscore many of the things that I'm trying to talk about today.

Now, the valve is a simple check valve, which you can see here on the left. This is an aortic valve. It is made up of not a homogeneous but a very complex tri‑layered architecture of dense collagen, glycosaminoglycans and the ventricularis, which is a collagen‑elastin composite, with interstitial cells throughout, coated on each side by layers of vascular endothelial cells, which have also been shown to be slightly different from those found in the rest of the circulation.

So people always ask me, well, what's the Young's modulus for a leaflet? And I teach undergraduate and graduate biomechanics courses. And one of the first things I tell my undergraduate students in particular is that if you leave this course with one thing, it should be this: never use the term "Young's modulus" for living tissues.

A really good example is some work we did some years ago where we actually took leaflets from a valve. We stretched them under planar biaxial tension to emulate the physiological loading condition. And you end up producing a stress‑strain curve that looks something like this here on the right. In the radial direction -- that is, in the direction of flow here -- you'll see the kind of classic J‑shaped curve of soft tissues, going from very compliant -- and notice this is very common for soft tissues: you'll get 40 or 50 percent strain before significant loading.

You can talk about a stiffness in terms of the slope here, whereas in the other direction, the circumferential direction, we see a more rapid loading at a smaller strain level. But then the curve tilts backwards. And so the issue is that if you're talking about stiffness, you're going to compute a negative stiffness.
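To see the point numerically, here is a minimal sketch of tangent stiffness along a J‑shaped curve. The exponential law and its parameters are made up for illustration, not fit to any valve data:

    import numpy as np

    # Hypothetical J-shaped response: sigma = A * (exp(B * strain) - 1)
    A, B = 1.0, 20.0                      # illustrative parameters (kPa, unitless)
    strain = np.linspace(0.0, 0.5, 200)   # up to 50 percent strain
    stress = A * (np.exp(B * strain) - 1.0)

    # Tangent stiffness d(sigma)/d(strain) along the curve
    tangent = np.gradient(stress, strain)

    for s in (0.05, 0.40):
        i = np.argmin(np.abs(strain - s))
        print(f"tangent modulus at {s:.0%} strain: {tangent[i]:10.1f} kPa")
    # The 'modulus' differs by orders of magnitude along one curve,
    # so no single Young's modulus describes the tissue.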

The way I like to summarize this for all of you is there's more to life than Young's modulus. And the reason for this is very straightforward. Remember, these tissues are made up of a limited number of available structural proteins. And these are made primarily of type 1 collagen, which will break at about 12 to 15 percent strain.

So the question you ask yourself is how does nature make tissues that go to 80 percent strain without failing. And nature does this not by a material property but in fact a structural property, which you see here.

This little diagram here indicates a little box, and this little cartoon represents the circumferentially aligned collagen fibers. And as you increase the ratio of the forces in the radial versus the circumferential direction, you'll see the fibers actually rotate. Therefore, the material response you measure at the bulk level is really due to these microstructural features. So one really cannot separate structure from material response.
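Under the simplest affine‑kinematics assumption, which is itself only an approximation for real valve tissue, the fiber rotation he describes follows directly from the tissue‑level stretches. A minimal sketch with hypothetical stretch values:

    import math

    def rotated_fiber_angle(theta_deg, lam_circ, lam_rad):
        """New fiber angle (degrees from circumferential) after an affine
        biaxial stretch: tan(theta') = (lam_rad / lam_circ) * tan(theta).
        """
        t = math.tan(math.radians(theta_deg))
        return math.degrees(math.atan((lam_rad / lam_circ) * t))

    # A fiber initially 30 degrees off circumferential rotates toward the
    # radial direction when the radial stretch dominates:
    print(rotated_fiber_angle(30.0, lam_circ=1.05, lam_rad=1.40))  # ~37.6 deg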

In terms of tissue engineering, we've been collaborating with a group up in Boston for about the last decade. This is John Mayer's group, which has pioneered approaches for tissue‑engineered pulmonary valves. And most of you are probably familiar with this. They initially started with cells from an ovine source -- they started with the carotid artery and moved on to other sources.

But the basic premise is the same. The cells are extracted, isolated, expanded and seeded onto a scaffold, often followed by conditioning in a bioreactor to produce a tri‑leaflet construct. And it turns out this is a very simple thing to do. And one of the things that I really enjoy about working with surgeons -- as I often joke, surgeons are not always sure what they're doing, but they're never in doubt.

And so John just did this -- implanted it and said it's going to work, and it did. And I said, well, this isn't going to work; it's going to turn into scar tissue; it's going to fall apart. Except it turns into a functional tri‑leaflet valve in most cases in about 20 weeks, and that really is quite remarkable.

Now, in terms of the scaffolding they make things out of, these are what are known as non‑wovens. For those of you who haven't seen them before, they look like the felts that you can get in a hardware store. And they may appear to be very simple materials, but actually, they're quite complex.

And those of you who are really interested in scaffold biomechanics, I suggest reading the textile literature because they've looked at this stuff for many years. You just have to deal with wiggy terminology like deniers per weft and things like that, but it makes sense if you get the definitions down.

They're made out of small biodegradable polymer fibers -- an approach pioneered by the Langer lab, with Lisa Freed in particular. They're needle‑punched. They form these nice little domains about a millimeter in size. And within the domains, you can see these fibrous networks. The fibers are about 10 to 15 microns in diameter and are extremely stiff, having a modulus in the neighborhood of 10 to 20 gigapascals.

One of the things you want to ask as well is how tissue actually forms on what appears to be a very simple material but is really quite complex. And one of the things we first looked at is cyclic flexure. The idea of this animation was to show that valves open and close dynamically, and one of the primary modes of deformation is flexure.

So we designed this little bioreactor to take strips. And, again, the philosophy here was -- well, valves undergo very complex multi‑dimensional deformations; let's try to break that down into its simple parts. You can bend these little strips back and forth and use them to investigate tissue formation.

The physical configuration is actually very straightforward: simple three‑point bending that slides back and forth. And from that, we can also do three‑point bending tests and use the Euler‑Bernoulli relationship to back out an effective stiffness.
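For reference, here is a minimal sketch of that Euler‑Bernoulli back‑calculation for a simply supported strip in three‑point bending. The rectangular cross‑section formula is standard, but the dimensions and loads are hypothetical:

    def effective_modulus_3pt(force_N, deflection_m, span_m, width_m, thick_m):
        """Effective modulus from three-point bending (Euler-Bernoulli beam):
        E_eff = F * L^3 / (48 * delta * I), with I = w * t^3 / 12 for a
        rectangular cross-section.
        """
        I = width_m * thick_m ** 3 / 12.0
        return force_N * span_m ** 3 / (48.0 * deflection_m * I)

    # Hypothetical scaffold strip: 10 mm span, 3 mm wide, 1 mm thick,
    # with 1 mN producing 0.5 mm midpoint deflection:
    E = effective_modulus_3pt(1e-3, 0.5e-3, 10e-3, 3e-3, 1e-3)
    print(f"E_eff ~ {E / 1e3:.0f} kPa")  # on the order of the ~200 kPa quoted later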

Now, you would think that under such a simple little configuration, you won't see very much. But what we found, when you look at these 3D reconstructions, was that if you just grow these materials in static culture, you get a predictable gradient in tissue formation due to nutrient limitations, particularly with the reduction in the diffusion capacity of the material as tissue grows.

Simply bending it back and forth at about 1 hertz not only promotes more uniform tissue formation but -- as I will show a little later in my talk -- has some rather interesting consequences for the quality of the ECM that's produced as well.

So, again, you're beginning to see that these things aren't necessarily so simple. We then developed a non‑woven structural model for these scaffolds in bending. And I won't go through all the details. The equations look a little busy, but they're really fairly straightforward.

The idea is to come up with a constitutive model for these, to understand how the scaffolds themselves work. And I think what's interesting here is what this little diagram represents: the fibers. Now remember, the fibers themselves have moduli of 10 to 20 gigapascals, and yet the effective rigidity of these scaffolds at the beginning is only about 200 kilopascals. What's the deal?

The deal is that what's really causing this is not the properties of the fibers: the fibers are highly undulated and act like little springs. And when tissue forms, it increases the number of bond points. So if you can imagine this -- these little symbols represent the bonds; the little circles represent their connections. If you plot, according to the model, the inter‑fiber bond length versus the apparent total modulus, this thing ramps up quite considerably.

Now, if you don't understand this, then you're going to think, wow, I'm producing tissue that's megapascals in stiffness and you're really not. What you're doing is the tissue is actually binding the fibers together. And by increasing the bond points, you essentially go from spring‑like behavior to strut‑like behavior.

And so that's why understanding some of the micromechanical implications I think is very important. And you can undergo order of magnitude changes in stiffness from this effect.
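One crude way to rationalize those order‑of‑magnitude changes is beam scaling: treat the fiber segment between bond points as a cantilever, whose stiffness goes as the inverse cube of its free length, so adding bond points (shortening the segments) stiffens the network even though the fiber material is unchanged. This is a toy calculation under that assumption, not the structural model from the talk:

    import math

    def segment_bending_stiffness(E_fiber_Pa, radius_m, length_m):
        """Tip stiffness (N/m) of a fiber segment treated as a cantilever:
        k = 3 * E * I / L^3, with I = pi * r^4 / 4 for a round fiber.
        """
        I = math.pi * radius_m ** 4 / 4.0
        return 3.0 * E_fiber_Pa * I / length_m ** 3

    E_f = 15e9       # ~10-20 GPa fiber modulus, per the talk
    r = 7.5e-6       # ~15 micron diameter fibers
    for L_um in (1000, 500, 250):  # inter-bond spacing shrinking as tissue forms
        k = segment_bending_stiffness(E_f, r, L_um * 1e-6)
        print(f"segment length {L_um} um -> stiffness {k:.3g} N/m")
    # Halving the free length raises segment stiffness eightfold.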

You can also look at the directional differences. These felts, we found out, are actually slightly directional. This is the model if none of the fibers interact at all, and this is the model if all of them interact. And you can see there is a very profound difference in stiffness. And we find that the models bracket the behavior really accurately -- the actual scaffold is somewhere in between, because there's a certain amount of frictional inter‑fiber contact within the scaffold.

Now, the other scaffold we looked at is made by electrospinning. And to echo Andres' comment about 10,000 papers on cell adhesion, we see about 50,000 on electrospinning. I'm not sure if a 10‑year‑old can do it, but some people do it with about an equal level of technical capability.

But the basic technique is very old. It's been around. The idea is that you have polymer in a solution that is drawn out by an electrostatic charge -- in some ways, not all that different from the mass spec talk we saw earlier. And the little twist we put on it is that we actually collect these on a rotating mandrel to induce alignment.

When I first saw these -- this is in collaboration with Bill Wagner at Pitt -- you can actually see these fibers are about a micron in diameter. And this is what happens when you don't move the mandrel. As you increase the mandrel speed, you can see an obvious degree of alignment.

That looks quite interesting. And for those of us who have ever looked at decellularized tissues in particular, you can see, well, these look kind of tissue‑like.

But the real question is why people are so interested in these scaffolds. And there are all kinds of reasons. But for us, if we look at a scaffold from this polyester urethane urea that's spun at 2300 RPM and compare that to a native pulmonary valve in both directions, you actually get a response that's not identical but very close.

So you can go from the non‑wovens, which are very stiff, to electrospuns, which have the properties of the tissue almost from the get‑go. In terms of mechanical analysis, if you look at mandrel speed versus the biaxial loading response in both directions, you can see that the faster you spin it, the stiffer it gets in the preferred direction and the more compliant in the other direction. I want to note, too, that in the cross‑preferred direction, the curves become more highly curved. And, again, this is a structural effect of the larger rotations that occur.

If you look here at the anisotropy ratio -- simply the ratio of the stiffer direction over the more compliant direction -- as you increase the mandrel speed up to about 2 meters per second, not a lot happens; the scaffold is almost perfectly isotropic. Then, at about 2 meters per second, it kicks up to about 1.3 and increases linearly from there.

So this part to me could be very attractive from a design standpoint -- you could almost dial in the anisotropy you want by the mandrel speed. The reason why you see this jump is mainly that the mandrel speed has to exceed the nozzle jet velocity. And when it does, that's when you start getting alignment.

So we fit this to an earlier, simpler version of a structural model that we developed for valve tissues some years back, and I won't bother you with all the details. But essentially, what you have here is a composite model of the effective fiber properties and the orientation of the fibers. And this is a kinematic term based on the experimentally measured strains.

And it turns out it fits the scaffold very well, under different stretch protocols. So what this suggests is that at a bulk scale, these electrospun scaffolds act like long fiber composites. And this is very important, because many other types of scaffolds that are made from gels have some soft tissue‑like properties, but they tend to be short fiber composites.

What's interesting, too, with this model is that, as I mentioned before, you want to ask, well, what are the intrinsic fiber properties, not just the bulk properties? And the model can back out, for example, the effective fiber stress‑strain curve. And you can see here the fibers become increasingly stiff with increasing mandrel speed. And this is in part due to the enhanced crystallization that we've measured.
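The flavor of that back‑calculation is a nonlinear least‑squares fit of a fiber‑level law to bulk data. This minimal sketch fits a simple exponential fiber law; the model form and the "measured" data are placeholders, not the actual structural model from the talk:

    import numpy as np
    from scipy.optimize import curve_fit

    def fiber_law(strain, A, B):
        """Simple exponential fiber stress-strain law (placeholder model)."""
        return A * (np.exp(B * strain) - 1.0)

    # Placeholder 'measured' bulk data for one mandrel speed:
    strain = np.linspace(0.0, 0.3, 15)
    stress = fiber_law(strain, 2.0, 8.0)
    stress += np.random.default_rng(1).normal(0.0, 0.05, strain.size)

    (A_fit, B_fit), _ = curve_fit(fiber_law, strain, stress, p0=(1.0, 5.0))
    print(f"fitted A = {A_fit:.2f}, B = {B_fit:.2f}")
    # Repeating the fit across mandrel speeds would show how the effective
    # fiber stiffness changes with spinning conditions.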

So from these kinds of models and these kinds of experiments, you can actually obtain true polymer fiber moduli. And that's very important: separating the structural effects of orientation from changes in intrinsic properties allows derivation of true moduli independent of structural features. Some practical uses could be guided scaffold design as well as in vitro conditioning regimes, which we're working on currently.

Now, the other side of this is structure. And structure is very complicated and very much dependent on what you're trying to measure. I'm going to focus this talk primarily on fibrous architectures because for those of us interested in tissue biomechanics, this dictates bulk properties.

Also, a little bit later -- not too much later, because we're all hungry -- I'll talk about local cellular deformations. I'm going to review a few techniques we've used. This is not intended to be an advertisement; I have no commercial interest in any of these techniques. There are many, many ways to do it. But I think the idea is to give you a flavor of what you can measure.

Then I also will be talking a little bit later about what I think are the very, very critical cellular deformations.

So one technique we've used in my lab for quite a while is scattered light. I think Peter Lelkes also mentioned this, and some other people have used related techniques. It's an intrinsically very simple technique, based on high school physics: the fact that a single fiber acts like a diffraction grating and produces classic Bragg scattering.

But in a tissue with thousands of fibers, you end up getting a continuous distribution. And if you go back and analyze that scattering distribution, you can back out the fiber orientations. Now, the key point here is that both the fibers and the gaps between the fibers act like effective slits. So it's both the fibers themselves as well as the spaces between the fibers.

You'll produce scattering patterns that look like this. This is for an aortic valve. And to analyze the data, you go around and pick off the intensity. And what you end up backing out after normalization is an orientation distribution of the scaffold fibers directly. Very fast, very inexpensive, and also exquisitely sensitive.
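The reduction from a scattering pattern to an orientation distribution is, at heart, just normalizing intensity over angle and summarizing its spread. A minimal sketch with synthetic intensity data; the distribution and the circular‑statistics summary used here are illustrative, not the lab's actual analysis code:

    import numpy as np

    # Synthetic angular intensity profile I(theta), theta in degrees (0-179):
    theta = np.arange(0, 180)
    I = np.exp(-0.5 * ((theta - 90) / 15.0) ** 2) + 0.05  # aligned near 90 deg

    # Normalize to a probability distribution over fiber angle
    p = I / I.sum()

    # Preferred direction and alignment strength via circular statistics,
    # using the double angle because fiber orientation has a 180-deg period.
    ang = np.deg2rad(2.0 * theta)
    C = (p * np.cos(ang)).sum()
    S = (p * np.sin(ang)).sum()
    R = np.hypot(C, S)                            # 0 = random, 1 = perfect
    mean_dir = (0.5 * np.degrees(np.arctan2(S, C))) % 180.0
    print(f"preferred direction ~ {mean_dir:.1f} deg, alignment R = {R:.2f}")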

A good example of this is what we call the tie‑dyed Jockey brief figures. This is the aortic valve at no pressure, with high alignment in the upper coaptation region and a nearly random orientation elsewhere. And just after application of about 20, 30 millimeters of mercury, you see it become highly aligned.

And this is directly due to ‑‑ you can see here the increased alignment of the collagen network. Again, for such a simple technique, the amount of information you back out is really quite rich. We've used this technique quite a bit.

For intestinal submucosa, we were actually able to detect the two layers of orientation that have been reported by Orberg and others in different cases.

Using it directly for engineered materials -- this is a piece of Dermagraft -- we can actually separate out the scattering effects of the woven mesh from those of the deposited collagen. And we were actually able to back out the orientation of the collagen from that of the surrounding mesh.

So you can actually do this, and it's possible to do it on a real‑time basis. It turns out that the scattering technique is also very good for scaffolds like the non‑wovens I mentioned a little while ago. After determining the orientation information, you can actually get out the standard deviation and fit it to a normalized Gaussian distribution.

More recent work I've done with Dave Butler at the University of Cincinnati looks at collagen gels seeded with bone marrow mesenchymal stem cells, and at some very basic things, like what happens if you make the construct the same way but at different lengths. This is the result from a shorter sample versus a longer one. Something as simple as the length of a construct, you can see, can induce a much higher degree of alignment than you see here. So, again, it's a very useful little gadget.

Now, when you're getting down to non‑transparent materials like electrospuns, things get a little bit more complicated. And the other thing you need to realize is that what appears to function as a long fiber composite will, on a more micro scale, actually behave very differently.

So, again, we looked at this by building a little stretch stage that we put in an SEM to look at both large‑scale and small‑scale features. And for those of you who have ever worked with image processing and analysis, there's a variety of ways to do this. But you can actually track fibers; we have some software we developed to get orientation distributions.

Every time I look at this figure on the right, I'm always reminded of a roadway map of Pittsburgh -- it's about as organized as that. And you can actually pick out tortuosity and look at a lot of structural features.

It turns out this is a very useful technique, fairly rapid. And as we increase the speed of the mandrel, we can actually see the alignment go from nearly random, here in black, to a higher degree of alignment. And so it works pretty well.

It's also very good at looking for things like structural uniformity -- how uniform is your scaffold? And you can see that here, from a larger‑scale image.

If you're actually trying to get cell‑based, living imaging, you can apply this not to SEM but to laser scanning confocal microscopy images. What you see here on the left is the scaffold -- these polymer fibers are autofluorescent -- and here is the same one under planar biaxial stretching, showing how much the fibers have straightened.

You can investigate things with this. This is an orientation distribution. For example, when I stretch a scaffold and release it, will it return to its same configuration? We were able to show that. And what happens when we stretch it in various directions, and how does the whole thing act as a network?

I'm just going to skip over that slide.

Now, on a more local level, particularly if you're interested in cellular responses, things get a little bit more complicated. So what we actually do is go back and track fiber intersection points to look at the local strain field. And if you take a look here at the local level, the micro‑architectures of these materials actually look rather complicated.

This plot shows that within a 40 by 50 micron window, these are the resulting major principal strains in different directions of stretch. And what you can see is that if you're looking at cell responses on these scaffolds, the cells are subjected not to homogeneous deformation but locally to quite a large range -- anywhere from about 4 percent here to about 14 percent here, and from about 5 percent here to about 20 percent here. So, again, if you're trying to predict cellular‑level responses on these scaffolds, you need to understand how the cells are actually being deformed.
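For reference, principal strains like those quoted are obtained by estimating a local deformation gradient from tracked marker (here, fiber‑intersection) positions and taking the principal values of a strain tensor. A minimal sketch; the deformation gradient used is hypothetical:

    import numpy as np

    # Hypothetical local deformation gradient estimated from tracked
    # fiber-intersection displacements in one small window:
    F = np.array([[1.12, 0.03],
                  [0.05, 0.98]])

    # Green-Lagrange strain: E = (F^T F - I) / 2
    E = 0.5 * (F.T @ F - np.eye(2))

    principal = np.linalg.eigvalsh(E)   # sorted ascending
    print(f"major principal strain: {principal[-1] * 100:.1f}%")
    print(f"minor principal strain: {principal[0] * 100:.1f}%")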

So now, that's all scaffold‑based work -- basically looking at fairly inert materials. We're much more interested in the effects of tissue formation, physical stimulation effects, and the effects of things like scaffold degradation, mass changes, surface versus bulk erosion and stress transfer considerations. And the more deeply you get into this, the more you realize that this is really quite interesting.

This slide here, for me, is kind of my working framework. And, again, I think this is all driven not by the scaffold but by the constituent cell population, and by how controlled cell deformation couples to the appropriate loading waveform and other factors, such as the number of cycles, in addition to growth factors.

Then there's this mysterious black box, which we've seen a couple of talks trying to determine. Obviously, there are probably several Nobel Prizes buried within this simple thing. The outcomes are phenotypic changes, biosynthetic levels, robust ECM formation, scaffold degradation.

It looks very simple, but this is actually, I think, one of the real major challenges for us in tissue engineering.

To give you a little bit of background, I think it's important to go back and look at the native tissue. And this has been studied particularly in our lab on valvular tissues, and by Farshid Guilak and Van Mow on cartilage, among others.

I think one of the things we try to look at is bulk cellular effects using the nuclear aspect ratio. So, again, I want you to realize that even within dense connective tissues -- tissues where you don't normally consider it -- cells play a very important role.

This is an example of the aortic valve cells. This is stained for the aortic valve nuclei. And you can see that as you gradually increase pressure, those nuclei go from being nearly spherical to being cigar‑like. And this goes from zero to 90 millimeters of mercury in about 15 milliseconds. So there are tremendously large amounts of deformation happening very rapidly. And you go from 90 back to zero when the valve opens again. So the cells within these tissues are in a very highly dynamic kinematic environment.

Looking at the nuclear aspect ratio, it turns out -- we've measured the relation between the stretch of valve interstitial cells, from zero to 20 percent, and the cell and nuclear aspect ratios. And the correlation's actually pretty good. So it's a reasonably good measure, at least in these cells, of overall cellular deformation.
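The nuclear aspect ratio itself is a simple image measurement: fit each segmented nucleus with an ellipse and take the major over the minor axis. A minimal sketch using scikit‑image on a synthetic binary mask; real measurements would of course start from segmented micrographs:

    import numpy as np
    from skimage.draw import ellipse
    from skimage.measure import label, regionprops

    # Synthetic binary image containing one elongated 'nucleus'
    img = np.zeros((100, 100), dtype=bool)
    rr, cc = ellipse(50, 50, r_radius=8, c_radius=20)  # elongated along columns
    img[rr, cc] = True

    for region in regionprops(label(img)):
        nar = region.major_axis_length / region.minor_axis_length
        print(f"nuclear aspect ratio = {nar:.2f}")   # ~2.5 for this shape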

What you get is actually quite interesting. This scale here on the X axis represents the relative degree of collagen alignment from those light scattering tie dyed Jockey brief plots I showed you earlier. And this is the nuclear aspect ratio.

Now, if this were a simple homogeneous sheet of material, this would be a straight line, and that wouldn't be very interesting. But what really happens in a native tissue is that at very low degrees of loading, you get a lot of alignment in the collagen network, and not a lot happens to the cells. Then, when the collagen fibers lock up and you keep adding stress, you'll find from the nuclear aspect ratio that the cells really become compressed.

A little cartoon here on the right shows a very simpleminded diagram of how we think this happens. Essentially, you go from straightening, where everything happens to the network and not much to the gross cell deformation, to what happens during compression, down here on the lower right. So this is the kind of macro, or meso‑scale, environment that valve and many other connective tissue cells are in.

In terms of mechanical stimulation, we can take strips of tissue and put them in some kind of bioreactor, as I showed you before. These cells are highly dynamic. If you don't do anything, you get a drop‑off in HSP47 -- which, by the way, is a collagen chaperone gene, and a collagen chaperone is a good indicator of collagen production.

Now, one of the interesting things this raises is: what's your control? Most of us think the control is just static -- don't do anything. But these tissues are normally kept in a dynamic environment, and you can actually see a predictable drop in collagen production if you just let them sit there.

If you add TGF‑beta, or if you just stretch them, you can get some upregulation. But if you do both at the same time -- add TGF‑beta and tension -- you get an over threefold increase in response. So, again, you need to think about not only strain but also the biochemical environment, and the two coupled together.

Now, what's also interesting is what happens when cells are overloaded. This is a similar study we completed. This is the static case, and this is 10 percent strain, which keeps things about normal. And if you overstrain them, at 30 percent strain, you can actually see a decrease in their collagen type 1 production.

So that environment's actually very critical. If we go back to our tissues, the ones I showed you before under that simple flexure experiment, the question is then how do we translate this understanding and knowledge to form an engineered tissue.

I won't go into this in detail; this has all been published. But one of the things we found when we look at normalized cell distributions in the static versus flexed cases is a more uniform distribution of cells, and we see upregulation of DNA, collagen and sulfated GAGs pretty much at any stage, any time you stimulate it.

So the question's well, what are we seeing? How much of this is due to the scaffold changes? How much is due to the tissue? Are there differences in the quality of these tissues?

I need to just underscore that we're dealing with these highly complex composite materials. You can see here the native scaffold in 3D reconstructed, the scaffold with the tissue being formed, and a cross‑section showing how the tissue forms on the scaffold itself.

So we developed a non‑woven meso‑scale model. There's an old saying, by the way, that for every equation you show, you lose half your audience. So by the time I'm done, I'll have one‑half of one person left, I think. But just bear with me.

The idea here is that we can actually measure and back out the effective stiffness of the extracellular matrix, in the context of its being deposited on the scaffold, using a relatively simple model. And as for how you quantify these things, you can look at picrosirius red‑stained sections. This is a scaffold fiber remnant; here are openings where there were scaffold fibers before.

We can actually measure the collagen concentration using assays and then measure the distribution across the thickness using fluorescent microscopy and then correlate that to an actual collagen content.

Now, one of the classic problems is: if there's tissue forming, how does that affect the scaffold? And we came up with this really interesting idea of using polyacrylamide gels over a range of stiffnesses, from about zero to 800 pascals, and then seeing how that ultimately reinforces the scaffold. And the point here is that not only can you do that, it turns out to be a pretty linear relationship. But it does not follow the rule of mixtures, because, again, the micromechanics of these scaffolds are not so trivial.
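For context, the rule of mixtures he is comparing against is just a volume‑weighted average of the constituent moduli. A minimal sketch; the volume fraction and stiffness values are illustrative, not the measured ones:

    def rule_of_mixtures(E_scaffold_Pa, E_gel_Pa, v_gel):
        """Voigt (parallel) rule-of-mixtures estimate of a composite modulus."""
        return (1.0 - v_gel) * E_scaffold_Pa + v_gel * E_gel_Pa

    E_scaffold = 200e3                 # ~200 kPa effective rigidity, per the talk
    for E_gel in (0.0, 400.0, 800.0):  # gel stiffness swept over 0-800 Pa
        E_mix = rule_of_mixtures(E_scaffold, E_gel, v_gel=0.97)
        print(f"{E_gel:5.0f} Pa gel -> {E_mix:8.0f} Pa (rule of mixtures)")
    # The measured reinforcement deviates from this estimate because the
    # network micromechanics (bond points, fiber undulation) are not captured.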

Anyway, bottom line: you put this all together, and what you find is that in static versus flexed, the effective stiffness of the collagen -- or the ECM, as we should probably more accurately call it -- changes the mechanical response not only because you have more of it; you actually find in the flexed case about a 10 percent increase in the effective stiffness of the ECM due to the flexure itself. So there's an improvement in quality from doing this. And this plot here shows the estimated distribution of the stiffness as a function of depth in this tissue.

Now, looking at cell micro‑integrated scaffolds: in non‑wovens, this is relatively straightforward because they're very porous -- about 97 percent porous. Electrospuns are more dense and more challenging to deal with.

The Wagner lab has developed a technique where we can actually electrospray cells onto the mandrel at the same time the fiber's being spun. We then take these integrated scaffolds and put them on a little stretcher, which we place under a confocal scope, and look at cellular deformations.

You see things like this -- these are DAPI‑stained; this one didn't come out too well. And you can also use the autofluorescence of the scaffolds. You can stretch them and see the cellular deformations. And this is one of the few instances where you can actually get cells into these electrospun scaffolds.

Now, when you look at the deformation -- these are the reconstructions of the nuclei of the cells -- we can look at their orientations within the scaffold as we stretch it in real time. This represents the orientation of all the nuclei. And as you stretch in that direction, you can see the nuclei actually line up.

But it takes a very large amount of strain, about 50 percent strain to do this.

Now, most interestingly, when you look at the relation between strain and nuclear aspect ratio, again, it's not linear. But it's very different from what we observe in the aortic valve. In the aortic valve, remember, not a lot happened, and then all of a sudden everything happened. In the electrospun, it's the exact opposite. You find most of the change happens at lower strains. And by the time you get up to higher strain levels, you've pretty much flattened out; you've maximized your response.

So if you're looking at this with a brute‑force mindset -- I take a material and stretch it a certain amount, I'll get a certain output; if I stretch it double the amount, I'll get double the output -- it doesn't work that way. And that's because, on a micromechanical level, cells deform locally in these electrospun scaffolds not so much by the compaction that we saw in the native tissues but by straightening of the local fibers.

Once the fibers are pretty much all straightened, that's it. So from a micro mechanical point of view, you can maximize it out. And I think this is the kind of knowledge that's very critical for physical conditioning as well as understanding the system as a whole.

So just to kind of wrap up: if you can get all this knowledge, what do you do with it? And the answer is scale‑up. It's mainly looking at how local cellular deformations need to be controlled at the macro level in an intact valve, and at balancing the need for controlled biomechanical stimulation with the other design requirements.

We've developed this bioreactor system where we can take candidate materials, make them into a tri‑leaflet system, and put it into a controlled sterile environment for multiple weeks. And it turns out that this is important to do, because if you look at something as simple as collagen production in different regions of the leaflet, from the stent down to the free edge, you can see that you actually get a gradient of collagen production, even in something like this, because of the local micromechanics the cells are subjected to within these systems.

You can also do modeling of these scaffolds to show what happens. This is if the fibers are isotropic, using that same system. If you use electrospun scaffolds and introduce anisotropy, you can make the deformation more uniform. And presumably, you'd get a more uniform tissue production in that case as well.

So, on to issues and future trends. There are lots of techniques and approaches; I've hit on a few of them today. What's the correct approach? My philosophy is always to be driven by a functional understanding of the application.

There's more to life than Young's modulus. What do you measure? I think you need to understand that you're not dealing with simple materials. You don't need a PhD in materials science to do it, but you have to be informed.

One of the other issues that you deal with on a practical level is that biomechanical studies usually require large specimens. I always envy pathologists -- they can take little biopsies, and life is nice. We need comparatively large numbers of bulky specimens. And if you're doing in vivo work, for example, that's very expensive.

So there's also a cost benefit, and there's a lot of variability. One of the ways we try to deal with that is by developing a micro biaxial device for very, very small specimens, anywhere from 1 to 3 square millimeters. And this is an interesting challenge. This all fits on a microscope stage. And the idea is that you can actually take biopsy‑scale specimens and perform very delicate mechanical studies.

There's also a need for non‑destructive and simultaneous cell and tissue imaging during incubation and in vivo development. Optical methods are the ones of choice, and ultrasound and MRI have improved, so you need multi‑modal approaches. There's a need for standardization of approaches; a while back I was involved with an ASTM effort to try to do this. I think it didn't really play out because people realized that we don't even know what questions to ask, let alone how to answer them and codify them. There's a need for low‑cost, high‑throughput, physiologically meaningful experiments. And I think this is where the role of the commercial sector can be pivotal.

I want to acknowledge that, of course, like everybody else, I don't actually do anything except write grants and sign purchase orders. So the people that actually did this are all my great students: George Engelmayr, Dan Hildebrand, Dave Merryman, Todd Courtney, David Schmidt, John Stella, John Stankus (phonetic), Nick Amaroso (phonetic) and Chad Eckert; research faculty; and my collaborators in Boston and Pittsburgh and elsewhere, who have been just wonderful; plus very generous funding from NHLBI. And also, several of the people listed up there are on a biomechanics training grant, an NIBIB‑funded training grant entitled Biomechanics and Regenerative Medicine.

I want to thank you for your time and hope your stomach isn't growling too much.

DR. PLANT: Thank you, Michael. That was great.

I think in the interest of time, since we're running way behind, unless there is one like really, really quick, really, really pressing question, that I'm tempted to say grab Michael now before he leaves. And we'll break for lunch and meet back here at 1:55, and try to start on time very promptly. Thank you.

(Whereupon, a lunch recess was taken.)

A F T E R N O O N   S E S S I O N

DR. PLANT: Good afternoon. We're going to reconvene here to have our last talk of the day. And at the end of this talk, there will be a short additional sort of roundtable discussion with all of the participants of this morning in case there are additional questions that are global or of speakers who have come before.

But right now, we're going to hear from John Elliott from NIST.

John is a project leader and research scientist at the National Institute of Standards and Technology in the Cell and Tissue Measurements Group. His projects are focused on the development and testing of quality control metrics for cell culture and new measurement techniques for assessing differentiation in mesenchymal stem cells. He received his PhD in physiology and biophysics at SUNY Stony Brook in 1999, and he was an NRC postdoc prior to becoming a staff member.

So his talk today is on considerations for quality control of in vitro cell cultures.

DR. ELLIOTT: Okay. No problem, great. Thanks.

I also would like to thank the organizers for giving us a chance to present some of this work.

I'm going to talk about quality control in cell culture and what that means to us, and especially to people at NIST ‑‑ NIST has a history of thinking about some of the science and measurement issues, especially in biology, maybe a little differently than people who don't get the fun of focusing on measurement science.

So the NIST mission, just to give you kind of an overview of NIST first, is to promote US innovation and industrial competitiveness by advancing measurement science, standards and technology. So it's a non‑regulatory organization. But really what we're focusing on is: is there a need for better measurements in a particular area, and how can we add value to this arena by adding some measurement scientists?

So there's a lot of expertise in physical and chemical sciences at NIST and also in measurement infrastructures, in standards and reference materials and how those are used to improve measurements.

So kind of the three areas that I think about ‑‑ this is my NIST point of view, too. It certainly isn't necessarily what everybody at NIST does. But this is my point of view as a measurement scientist.

One of the roles is facilitating measurements. So this can come in the form of consensus standards and standard reference materials. If you think about the world of tissue culture, let's say, a standard reference material might be some kind of material that everybody can buy and grow cells on, and somehow it would be identical every time they had it. That doesn't exist; I'm just giving an example of what a standard reference material could be. As for consensus standards, NIST is very good at organizing people to come to agreement on a standard, and that could be a procedure or other details that would be part of measurements.

New measurement techniques are another aspect that NIST has a lot of expertise in, because of its physical and chemical expertise in the past. It can take a lot of the new nanoscience tools that were built for physical and chemistry measurements and begin applying those to biological systems; several people here from NIST work in that area. And there are some very exciting new things for three‑dimensional scaffold imaging and other things like that.

Then finally, how to extract new information from existing data. So you can take advantage of models ‑‑ and we'll actually see a little bit of that in my presentation ‑‑ models and statistics to really take advantage of information that might be in existing data, and that's particularly true of distribution data.

As we've talked about today, cells have heterogeneity, and what kind of things can we learn from heterogeneity. And although it may cause confusion in some cases, it also gives us sometimes better ways to be able to analyze statistics and also some information about how cells are behaving.

So we focus on measurement infrastructure, and these are the things I'm going to talk about in particular with cell culture.

How robust is this measurement? Robustness means: can anybody do it? How robust is it to changing conditions, various things? Is the data high quality? For example, distribution data has a quality to it that's more than just a mean value of a measurement.

What are the best ways to represent the data? Does every laboratory get the same answer? That particularly, to me, is an interesting question: how do we get two laboratories that have the same cell culture to get the same measurement? And whether we're talking about a morphology measurement or a cell volume measurement or an immunofluorescence measurement, what kinds of tools do people need to get the same measurement in two different laboratories? Those are important considerations. And also, for example, what's the best statistical method for detecting differences in a response?
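
(One hedged illustration of that last question ‑‑ not a NIST procedure, and using synthetic log‑normal data in place of real measurements ‑‑ is a two‑sample Kolmogorov‑Smirnov test, which compares whole distributions rather than just mean values:)

```python
# A minimal sketch, assuming synthetic log-normal volume data, of one
# common way to compare two measured distributions (e.g., the "same"
# culture measured in two laboratories): the two-sample
# Kolmogorov-Smirnov test, which is sensitive to differences in shape,
# not just in the mean. An illustration, not an endorsed procedure.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
lab_a = rng.lognormal(mean=np.log(2000.0), sigma=0.30, size=5000)
lab_b = rng.lognormal(mean=np.log(2100.0), sigma=0.30, size=5000)

stat, p_value = ks_2samp(lab_a, lab_b)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")
# A small p-value suggests the two cultures were not drawn from the
# same distribution, even if their mean volumes happen to agree.
```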

So at NIST, the framework that we've developed in our lab, very much to think about cells in a measurement science way, is this idea of the cell being a meter. The cell takes input signals ‑‑ various nutrients, growth factors and matrices ‑‑ and integrates the input signals through a large number of signaling pathways. And then a change in the cell's status occurs. Some kind of program is initiated, whether it's proliferation or something else. And we can detect that by biomarkers.

Well, we can kind of box that into a black box and just call it a meter. And it has input signals and output signals. And in the measurement paradigm, it's difficult to make measurements with the meter unless you know the meter is working correctly.

So that's kind of the framework to think about quality control in cell culture. Is it possible, or what is it going to take, to make measurements of cells that are in culture ‑‑ whether that's a cell line or whether they're stem cells ‑‑ to ensure that, or at least to have some kind of confidence that, they are behaving similar to what they were last time?

And, again, as you see at the top here, we're basically asking the questions are the cells behaving as expected before we use them.

Now, cell culture has a particular role in biology. Basically, it's the process of keeping cells alive under some kind of in vitro or ex vivo conditions. And usually, what that's used for is expanding cell number ‑‑ which we've heard a lot of stories about, or at least a lot of discussions about, in the last couple of days ‑‑ and also maybe for getting cells ready for a cellular assay.

So metrics to do quality control on cell culture may just look at the biological functions that are involved in expanding cell numbers, such as growth and division parameters.

Now, that doesn't necessarily mean that the cell is going to behave perfectly in all applications afterwards. But the idea with this quality control is: do we have some kind of metric to at least give us confidence that things are working as we expect them to?

That's a challenge because ex vivo or in vitro conditions are really an artificial environment. There's a variety of things that can vary, such as incubators and things that may happen in incubators ‑‑ temperature changes, for example, carbon dioxide, pH ‑‑ the extracellular matrices that the cells grow on, the nature of tissue culture polystyrene, the serums that cells are cultured in, passaging, freezing and thawing. With all those changes possible for cells, I think that the idea of having metrics to ensure that cell culture is working right is important to consider.

So basically, I'm going to talk about these quality control metrics and what kinds of things we can think about. From the NIST point of view, or at least from my point of view, what kinds of measurements are good candidates for quality control?

As I said, we want robust and routine measurements. When I say "routine," I don't necessarily mean easy but things that can be done routinely with cell culture. And we'll see examples of what I'm talking about, some of the measurements coming up.

The next one would be calibrated or traceable. And what that really means is that the measurement ideally provides an absolute number. So can we get a measurement in square microns of area, for example, or some kind of absolute measurement that's very easily transferable to another laboratory: if they calibrate their microscope and use the proper reference materials, for example, they'll get the same answers. So calibrated and traceable is really a nice feature to have, which I think is a challenge in many of the biological measurements.
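
(A hypothetical illustration of what traceability buys: if each laboratory calibrates its pixel size against a reference artifact such as a stage micrometer, a pixel‑count area converts to comparable square‑micron values. The calibration number below is assumed, not a recommended value:)

```python
# Hypothetical illustration of a calibrated, traceable area measurement:
# the only instrument-specific quantity is the pixel size, which each
# laboratory calibrates against a reference artifact such as a stage
# micrometer; the reported square-micron values are then comparable.
UM_PER_PIXEL = 0.65  # assumed calibration for this microscope/objective

def area_um2(pixel_count, um_per_pixel=UM_PER_PIXEL):
    # Area scales with the square of the linear calibration factor.
    return pixel_count * um_per_pixel ** 2

print(area_um2(4200))  # a 4200-pixel cell footprint, in square microns
```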

Measurements linked to a cell process ‑‑ we do want that, the metric reporting on cell culture conditions, let's say, and also generating some high quality data, so we can really do good statistical analysis to be able to detect when the cell culture has changed from a previously known condition.

So what I'm going to talk about is two candidates that we've been thinking about. And these are measurements we've thought a lot about in the laboratory and have some of the properties that I talk about here. This is going to be cell volume measurements and cell spreading and morphology measurements.

They're not as ‑‑ well, I think they're exciting. But they may not appear as exciting as immunofluorescence, where antibodies that detect a particular protein are used in the assay. But I think what we'll find out, by thinking about what is involved in these measurements, is that we really are looking at cellular processes, and we can really get great measurements of what's going on with them.

So it's important to think about the origin of a cellular response, whether it's a protein that's being over‑expressed or a change in cell morphology, and that all these responses are really linked to signaling pathways. The change in cell morphology is a result of a lot of various factors coming into a cell and then getting integrated. And the cell has a corresponding morphology.

So with cell volume and cell spreading, which I show down here: cell volume can be connected to cell cycle and cell growth, as I'll show you in a minute. Cell spreading also reflects cell cycle and cell growth, so it's really related to cell volume, plus there's a cell adhesion component to that.

So by using these measurements of cell volume and cell spreading ‑‑ which are kind of structural measurements ‑‑ really we're assessing a number of signaling pathways that are involved in cells, but not as directly as an antibody or some kind of particular functional assay would.

Now, as we talked about earlier, we should expect a distribution of responses; for anything that we measure, we're not going to get a single value for all of the cells in the population. And this is an example of a single clonal population. We're looking at cell shape here and gene activation. And this clonal population has been transfected with a tenascin promoter‑driven GFP.

These are unsynchronized. So we have an unsynchronized single clonal population of cells growing here. And if you just look at the morphology, obviously, there's a whole bunch of different shapes of cells. And if you look at the gene activation ‑‑ if you look at the tenascin, the green fluorescent protein activation ‑‑ you can see there are a number of cases where cells are not expressing green fluorescent protein, other cases where they're expressing high levels of green fluorescent protein, and all the ranges in between.

So we want to take advantage of being able to measure that distribution, because that is part of the signaling that this population of cells is undergoing. It has this range of responses.

So that's an important consideration here. And we've thought about it in the lab. And if you look up at ‑‑ we're going to switch to cell volume data now, but this is very similar. Cell volume, as I said ‑‑ cells can have a range of different sizes, and you see what may look like a log‑normal gaussian; we can just call it that for right now. Basically, it's kind of a skewed gaussian. This axis is the fraction of cells that have a given size. So we see a whole distribution of sizes for the cells.

But more importantly, passage after passage after passage, we see the same distribution of sizes. And these cells, of course ‑‑ that means they are growing through proliferation, and then nine‑tenths of them are getting thrown away and the remaining 10 percent continues to be propagated.

We see, with NIH3T3 fibroblasts and these A10 muscle cells, that these are different distributions, and so these may be characteristic of some of the properties of the cells. And I think we're going to see that in the future.

So our laboratory has focused on what the origin of this distribution is. And with some work by a really super talented graduate student, Dr. Halter, what we thought about is, well, let's come up with a simple model of where the distribution comes from. And we basically modeled cell growth as simply as this: the cell has to grow to about twice its size on average and then divide in half, and then each of those cells does the exact same thing.

We're just going to give it a little bit of noise in the cell cycle time, in that a cell maybe has 22 hours plus or minus a 20 percent CV on when it divides, and also a little bit of noise in the growth rate, in that not every cell has the exact same growth rate; they pick from an average, with some noise in that growth rate.

So we have really reduced it to two processes. One is a growth process, and one is cell cycle time. Well, for cell cycle time, we know there are a lot of signaling pathways for that. The change in volume rate is very likely related to the change in the dry mass inside of a cell, if we assume that the density in a cell is constant.

Really, what we're thinking about is that the average protein expression rate inside of a cell is really this cell growth rate. So the idea is we're going to link the volume to two parameterized processes and their noise. And at the bottom, you can see an example of a single cell going through a couple of different division cycles, with different rates and different times between division cycles.
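
(A minimal sketch of the model as described ‑‑ an illustration, not Dr. Halter's actual code. It assumes linear volume growth within a cycle, roughly a doubling by division, 20 percent CVs on cycle time and growth rate, and the textbook age distribution of an asynchronous, exponentially expanding culture, in which young, small cells are over‑represented:)

```python
# Minimal sketch of the growth/division model (an illustration, not the
# authors' code). Assumptions: volume grows linearly within a cycle so a
# cell roughly doubles by division; cycle time and growth rate each
# carry ~20% CV of noise; cell ages follow the standard distribution for
# an asynchronous, exponentially expanding culture, where young (small)
# cells are over-represented -- the source of the skewed shape.
import numpy as np

rng = np.random.default_rng(0)

def sample_volumes(n=100_000, t_cycle=22.0, cycle_cv=0.20,
                   rate_cv=0.20, v_birth=2000.0):
    T = rng.normal(t_cycle, cycle_cv * t_cycle, n)  # per-cell cycle time
    r = rng.normal(1.0, rate_cv, n)                 # per-cell growth rate
    # Age density p(a) ~ 2^(-a/T); sample it by inverse transform.
    u = rng.uniform(0.0, 1.0, n)
    age = -T * np.log2(1.0 - u / 2.0)
    # Linear growth: a cell of age a has volume v_birth * (1 + r*a/T),
    # i.e., about 2*v_birth just before it divides in half.
    return v_birth * (1.0 + r * age / T)

volumes = sample_volumes()
# A histogram of `volumes` reproduces the skewed, log-normal-looking
# shape seen in measured electronic-sizing volume distributions.
```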

When we run that simulation in the computer, we end up with a distribution that has this kind of log‑normal ‑‑ I'm going to call it that for right now, not having a better description ‑‑ kind of shape. And this is scaled a little differently than what we saw previously. But what you'll see is that this is going to fit right on top of the data.

Now, what's nice about the simulation is we can now change some of these parameters and look at how the distribution is going to change, and it's relatively straightforward how it changes. Basically, the whole distribution ‑‑ the mean value, if you want to think of it that way ‑‑ will shift to the right if we increase the growth rate, which means the cells take on more volume per unit time, or if we increase cell cycle time, so that the cells hang around longer; assuming they continue to grow at the same rate, they should get larger on average per cell cycle. And it would shift toward smaller sizes if we have decreases in the growth rate or cell cycle time.

We've also looked at the noise factors ‑‑ what the noise parameter in cell cycle time does if you have more noise or less noise. Well, it will change the slopes on the edges of this distribution. It changes the shape a little bit.

So we kind of have some ideas of how that will change things. And so now we can look at whether our model system really predicts what happens with cells, just to validate that our model is representative of what can happen in cells.

So what you're seeing at the very top here is actually a fit over the top of the volume distribution data that I showed earlier. The black line is the fit of ‑‑ well, actually the simulation. One of the things I wanted to say is that Michael Halter ‑‑ I rarely get formulas to put on my screens, but he actually is a very talented mathematician and came up with this amazing analytical description of what a population of asynchronous cells continuously growing and dividing should look like. And so that's our measured population.

Then we can use fitting procedures, basically. If we know what the cell cycle time is, we can fit and get the growth rate and the noise parameters that are involved, which allows us to fit that distribution.
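
(A hedged sketch of that fitting step. Dr. Halter's analytical population distribution is not reproduced here; a log‑normal density stands in for it purely to illustrate the mechanics of extracting parameters from a measured volume histogram, which is simulated below:)

```python
# Illustration of the fitting step only. The analytical asynchronous-
# population distribution derived by Halter is not reproduced here; a
# log-normal density is used as a stand-in, and synthetic data stand in
# for measured electronic-sizing histograms.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_pdf(v, mu, sigma):
    # Log-normal density (stand-in model, not the authors' formula).
    return np.exp(-(np.log(v) - mu) ** 2 / (2.0 * sigma ** 2)) / (
        v * sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
samples = rng.lognormal(mean=np.log(2000.0), sigma=0.3, size=20_000)
fractions, edges = np.histogram(samples, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

(mu_fit, sigma_fit), _ = curve_fit(lognormal_pdf, centers, fractions,
                                   p0=(np.log(1500.0), 0.5))
print(np.exp(mu_fit), sigma_fit)  # recovered scale and shape parameters
```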

So there's an interesting experiment. We can use aphidicolin, which is a DNA polymerase inhibitor. What it does, if you use it at a very low concentration, is slow down the synthesis phase of the cell cycle, basically extending that cell cycle. And as you see, we changed the cell cycle of these A10 cells from 29 hours to 50 hours with increasing concentrations of aphidicolin.

What our model would predict is that if we don't change the growth rate when we add this drug, then basically, by extending the cell cycle time, the cells should grow to a little bit larger volume. And indeed, we see that. That's shown in the three concentrations here. We see that the cell volume has gotten larger.

Actually, if we go down to the model, you can see the rates are relatively constant ‑‑ 144, 126 ‑‑ and I think there may be a small drop in the rate, at least as calculated. But the noises in that rate process are relatively constant.

So now the volume measurement, which is a measurement that has been around for a long time, is very easy to make during every passage, in that there's an electronic sizing device ‑‑ it's classically called a Coulter counter, after the guy who invented it ‑‑ that allows you to make these kinds of measurements. People generally use it for counting cells. And the volume data is not really looked at.

But the volume data can provide some important insight into some of these signaling processes.

So we can now look at volume and, when we have changes in volume, begin to think about what could be happening inside of the cell. Here are some examples. These are new and old serum batches, just looking at the response of 3T3s over several days after replating.

What we see in some situations, basically ‑‑ and this is the original serum we were using ‑‑ is that after 24 hours, which is this dark blue line, the cell volume has actually increased. And so in our model, what that kind of means is that the cells have begun growing. So they're gaining mass.

But they're not cycling yet, because what cycling does is divide that mass in half into two smaller masses. And so right now, it looks like there's a lag time where the cells are beginning to grow but not dividing in the new serum. But after approximately 24 hours, or a little bit later, the cells are returning to the expected size, indicated by that black line.

Anyway, we can see changes in the way cells behave in different serums. But surprisingly, after 96 hours, these different sera ‑‑ these are all 10 percent serum, at least two of them from different manufacturers ‑‑ we see that the cell size distributions, and this is triplicates from all three at 96 hours, fall right on top of each other.

So by the time the cells have come to equilibrium after a number of days, the cell volumes are back on track independent of the serum. So that's an interesting finding about the cell cycle and the growth rates of these cells.

I'm going to show you another example, which is totally fascinating: this idea of varying serum concentrations. So we varied serum concentrations from 1 percent to 10 percent. Now, with A10 cells, 1 percent slows down the cell cycle by probably a factor of 4.

So they are very, very slow growing. What I showed you previously is that when the cells slowed down, they continued to grow and they got bigger. Well, interestingly enough, here we've slowed the cells down tremendously, but when we look at the cell volumes, they're identical to each other. So the only way that would be possible in the model is that if the cell cycle time has increased, then the growth rate must have decreased tremendously ‑‑ which is not surprising in low serum concentrations, because the cells may not have the growth factors to induce nutrient intake, and that may slow down protein production. And then it looks like the cell cycle time slows down to accommodate that.

But it is amazing that the cells can get to the same volume even though they are growing with different cell cycle times. So the volume measurement can give you very important information, but you do have to think about it: you look at what's happening, and maybe take the mean cell generation time together with the volume data to really understand what's going on. And with some of the equations and some of the things we've talked about, you can quantify some of these issues. So you can really provide some quantitative metrics about a culture.

Now, this is work our group is doing with Steve Bauer at FDA. And the task was really: what kind of metrics can we use to quality control a mesenchymal stem cell culture? This was an interesting challenge for us, as we generally work with immortalized cells, and here we moved to cells that are not immortalized and are primary.

I'm just going to show you some examples of how the volume measurements reflect what is happening in the culture. If you look at this male MSC line ‑‑ this is from the NIH‑funded Tulane Center for Gene Therapy; I believe that's the name of it. They have a male and a female MSC line available to researchers. At early passage, we can look at the volume distributions themselves. And that is shown here.

And we see, at least with the third, fourth and fifth passages, that the distribution of volumes is relatively the same for the male line.

Based on our model, that really makes us believe that what is happening there is we have a population of cells that are continuously cycling, and they're remaining asynchronous ‑‑ at least the majority of them are. And they are going through this process of cell growth and division.

Now, that's interesting, because you might think that there's heterogeneity there. But really, we've modeled it on the idea that we have a single cell type that has a mean growth rate and a mean cell cycle time, with some variability around it, possibly due to stochastic processes inside of a cell.

So even though there is this heterogeneity here, it's the result of a single population, at least in our model. So that's important to consider. After the fifth passage, if you look at the sixth passage, the interesting thing that happens with these cells is that they begin to age, or they show a large amount of aging. And what we have here is the formation of larger cells. And you can see that by the number of cells that have larger volumes starting to increase over time.

With the female line that was run at the same time ‑‑ and this isn't really to say that there's any difference between male and female; it's just two different populations of cells from different donors ‑‑ we see that at passage three, we have a distribution which is somewhat wider than the distributions for the male cell line.

So the cells have a bigger size. But very quickly, they're starting to go into a state in which the cells are getting bigger. And based on our model again, that's probably happening because the cells are starting to slow down in cell cycle. But surprisingly, they're still gaining mass ‑‑ they're continuing to make mass, but they're dividing more slowly. So we see this increase in cell size.

So volume measurements are very interesting in that they provide some metrics; if you can understand them through this growth and cell cycle time model, you can understand what's happening to your culture.

Now, what occurs with those cells that I showed you ‑‑ the MSCs with the larger volume ‑‑ is shown here. And this is something that anybody who's worked with MSCs has probably seen before. At early passage ‑‑ and these are cells that are nearly fresh out of the patient ‑‑ you have a population of cells that are very small. They divide on the order of a 15‑hour division cycle. So they divide often, and they remain small.

They look great. They look like a homogeneous population, to some extent. At late passage, and if you haven't treated them well ‑‑ which basically means seeding them at high densities and allowing them to proliferate ‑‑ here's what the morphology looks like. There's a population of monster cells in there that are very large.

Again, this would be the cell type that I would say has stopped cycling but continued to gain mass after it stopped cycling. So that volume information is interesting, but what it also aids in understanding is morphology information. Those larger cells in the volume distributions are, I'll tell you, these larger cells in the morphology distribution. But morphology also has an extra factor, and that is dealing with cells adhering to the substrate.

So I talked about volume, and that's going to be an important measurement. Now I want to talk about morphology. But the volume measurement is an important factor for morphology, because volume is one of the major indicators of how big a cell spreads out on a particular material.

So morphology measurements are very interesting in that they are also traceable, in that it's possible to get two laboratories to get the same answer because there are existing standards for that. So that is an advantage of using morphology measurements. And it's also a cell‑by‑cell technique; at least with new microscopes, you can get information on individual cells and plot that as a distribution. So it provides a great dataset.

So we take advantage of automated microscopy, and that doesn't necessarily have to be the way you do it. But the advantage of automated microscopy is the unbiased data sampling. We can look at hundreds of different frames on a single sample with very little human bias in which frames you're choosing.

We take advantage of different fluorophores. So we have a technique developed in the laboratory with a two‑color system. Basically, it's a very bright whole cell stain and a nuclear stain. And we use automated microscopy to look at those cells.

You see, here's a picture of the whole cell stain and the nuclear stain. By using both of those, we can count how many nuclei are in a single object that's detected by the cell stain. And so that gives us the ability to discriminate between cells touching each other ‑‑ cell‑cell contact ‑‑ and individual cells that aren't touching each other.
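
(A rough sketch of how that two‑color logic might be implemented ‑‑ an illustration, not the actual NIST pipeline: label objects in the cell‑stain image, count nucleus centroids per object, and keep the single‑nucleus objects:)

```python
# Minimal sketch (an illustration, not the actual NIST pipeline) of the
# two-color idea: label objects in the whole-cell-stain image, count
# nuclei per object, and keep only single-nucleus objects as cells.
import numpy as np
from scipy import ndimage

def single_cell_areas(cell_img, nuc_img, cell_thresh, nuc_thresh):
    # Segment the whole-cell stain and the nuclear stain by threshold.
    cell_labels, n_cells = ndimage.label(cell_img > cell_thresh)
    nuc_labels, n_nucs = ndimage.label(nuc_img > nuc_thresh)
    # Nucleus centroids, and which cell object each centroid falls in.
    centroids = ndimage.center_of_mass(nuc_img, nuc_labels,
                                       range(1, n_nucs + 1))
    owners = [cell_labels[int(r), int(c)] for r, c in centroids]
    counts = np.bincount(owners, minlength=n_cells + 1)
    # Pixel area of each object; multiply by the calibrated pixel area
    # for traceable square-micron values.
    areas = ndimage.sum(np.ones_like(cell_labels), cell_labels,
                        range(1, n_cells + 1))
    # Keep only objects containing exactly one nucleus (isolated cells).
    return [a for lbl, a in enumerate(areas, start=1) if counts[lbl] == 1]
```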

Just as an example of what comes out of a polystyrene dish with fibroblasts, with nine different areas imaged, you see that there's a high level of reproducibility in the distribution of morphologies. And where we don't see complete overlap of the distributions, that's going to give us some insight into the uncertainty of the measurement. And this is, for all purposes ‑‑ I mean, in experimental conditions ‑‑ as homogeneous as we're going to get: cells grown in different areas on the same tissue culture polystyrene dish. For right now, let's say that's the best we can get.

Well, looking at that overlap experimentally with the microscopy measurement, that blur ‑‑ those overlaps in the histograms ‑‑ gives us an idea of the uncertainty in the measurement.

Now, here is an example with two sets of cells. These are cells that were obtained from the same population ‑‑ so they had the same volume distribution ‑‑ put onto two different plastic substrates from different manufacturers, and the morphology was measured. So what you see there is that the substrate matters: it's very important to have a good substrate if you're going to use morphology as a quality control characteristic.

So I just wanted to bring that up: in addition to knowing that your volume measurements are good, if you're going to compare cell metric to cell metric and you're going to do morphology too, you're going to need to make sure you have the substrate well controlled.

I'm going to quickly go through some of the things that we've thought about. And that's basically the idea of reference materials to do these kinds of measurements on. They can provide some insight into this.

Here's the final result. If you do have a well‑known material, we can look at cells a year apart and overlay their morphologies. And now the idea of coming up with a metric ‑‑ a specification for the quality of the culture and how it should behave ‑‑ is possible, or at least we can now consider that. So we've controlled a lot of features.

I'm part of this ASTM cell signaling subsection, and so we put together a standard test method for measuring morphology as a document that's been submitted, so people can start to think about that.

Real quickly, I want to go through that. It's an amazingly robust technique, and we've made the measurements a lot of times. But because it uses image analysis, fluorescence microscopy, fixing and staining, two‑color fluorescence microscopy ‑‑ there are just a lot of issues in there where standards, or at least ideas for standards, are required to make sure that people get the same measurements in two laboratories. And so it's an interesting challenge for biology to get measurements equivalent in different places. And that's all I wanted to say, I guess, about it.

Thank you.

DR. PLANT: Do we have time for a couple of questions for John?

DR. PARENTEAU: I just want to make a comment that I really appreciate that you mentioned that it's important to understand that your process ran as you expected it to. That's so important in doing these types of products, the biological end of it. The easier our measurements can be, while still confirming that those processes ran as expected, the better. We heard many times during the last two days how everything's so variable. Everything's so variable. And in the end, though, it has to be predictable and controlled.

So I really enjoyed hearing what you came up with, simple but elegant.

DR. ELLIOTT: Thanks. Yes, heterogeneity ‑‑ and this doesn't negate real heterogeneity, which is different cell types. But some of the heterogeneity may be a single population of cells exhibiting natural biological behavior. So it's really a single population of cells.

DR. PLANT: So if there are no more specific questions for John, then we'll just go into the next phase of this session, which is to open the floor for general comments and discussion regarding this session in general, what we've heard from our various speakers. And I think we have almost all of our speakers here.

Oh, good, Lani, you are still here ‑‑ all of our speakers here.

And I guess that I could start by posing a couple of questions to the audience to see if there's any resonance to this. We've heard a number of different things. I think that some of the things presented, particularly earlier today, were maybe a little different from traditional tools used in system ‑‑ in tissue engineering.

And I personally would be very interested to hear from the audience if ‑‑ what your thoughts were about the idea of trying to employ some of the tools of systems biology for the purpose of discovery of biomarkers or for understanding the cellular mechanisms.

I guess specifically one question that was asked up here after somebody's talk was whether you could get to the same result using a much simpler assay. In other words, if you were able to, for example, predict toxicology with an MTT assay, why would you want to do a whole screen of a large number of different parameters and collect all of these data, and then go through the data analysis of a large dataset in order to come up with an answer that was better ‑‑ and I guess maybe the question is how much better.

How much better would the answer have to be in order to make the extra work worthwhile? So I would put that question out there for discussion.

But any other questions that anybody has, please feel free to step up to the microscope ‑‑ to the microphone and start the discussion. Unless you're really, really small and then you can step up to the microscope.

UNKNOWN QUESTIONER: Let me take this question from a different vantage point. Some of the talks this morning really brought out heavy machinery in terms of computational power, in terms of mass spectroscopy and in terms of high throughput screening.

Now, the question to NIST, of course, is that establishing these parameters may require all this heavy machinery. But the question to FDA is: how practical is it to expect every aspect of every product to have to go through this? And who has all these facilities available to do all these tests if they become standards?

DR. WITTEN: I'll just make a general comment from FDA's point of view about what we are thinking and what we've learned in the last day and a half, which is really that our intention is to have some insights from the scientific community about tools that are available and questions that should be asked during product evaluation for these kinds of products.

And we certainly have heard a number of comments on that. And also the message, I think, has also been clear that some of these tools that are important for product assessment are also important in terms of feedback for product development.

But I don't think at the moment that what we would plan to do with this information is to make a list of these sophisticated tests that we expect people to apply. What we really wanted, and I think what we've gained to a large extent, is some insight into the types of questions that are appropriate and the types of answers that we might hope to gain.

So it's still going to be ultimately in the arena of the product developer to figure out how to answer those questions to best kind of characterize their own product. So I think these insights have been very helpful to us.

DR. PARENTEAU: Can I just take Anne's question a little bit?

MTT ‑‑ I think that the assay systems that we saw this morning can add value if they get at the mechanisms of cytotoxicity. Because MTT, what is that measuring? All it's measuring is mitochondrial activity.

So if the methods ‑‑ and I'd like to hear from the speaker ‑‑ if the methods, for example, could get at the propensity to go towards apoptotic events, or something of that nature where MTT was involved, and get more deeply into the mechanism of cytotoxicity for these drugs, I would think that's where the big value is for those assays.

DR. GIULIANO: So I agree with you. But I can add, too, that, yes, the mechanism is something that comes along with it, and it's very important. But with our main customer that we deal with, pharma, the stakes are so high that any improvement they get over these simple assays, they're willing to take.

For example, if it's going to take you a billion dollars to get a drug to market, pretty much everything has to be a blockbuster. So any improvement in the predictivity of toxicity assays is valuable ‑‑ especially in vitro, before they have to go to expensive clinical ‑‑ I mean, expensive animal studies.

Just raising that ‑‑ like that one table I showed, where we went from 80 percent up to 100 percent predictivity ‑‑ is beneficial to them. So we're making the assay as simple as possible while still giving them enough information to reduce the risk considerably.

DR. PARENTEAU: But drug development is 800 million plus, and one in five make it through. So if you can provide them a way to better their stakes, that is worth platinum rather than silver.

DR. GIULIANO: I agree.

DR. PARENTEAU: So that's what I'm saying: go for the platinum here, because that's what pharma needs. That's what they really want. They may buy your 20 percent improvement ‑‑ and maybe a former pharma person can horn in on this ‑‑ but I think that they'll really buy something that gives them an idea of improving that one in five in the end. And that's what they really need.

DR. BERTRAM: Tim Bertram, Tengion. I unfortunately bear the banner of former pharma. So I will not bear it proudly, but I will make a comment on this, having spent a number of years trying to address the question that Anne actually put forward.

One thing that I would suggest ‑‑ and I don't know who you're getting your guidance from in pharma ‑‑ is that replacement of the animal models is probably never going to occur. That is an untenable objective.

However, to Nancy's point, which is really quite tenable: in fact, one in five is the best odds. If you look at the portfolios of most of pharma, one in five is the absolute best project success rate that they're going to have. Most of their projects run in the one in 25 to one in 50 range.

So the way that you can help them ‑‑ and I think this addresses Anne's question, too ‑‑ is that what I find the in vitro tests doing is improving the likelihood that a product will be put into that very expensive development process. And that is where ‑‑ as Nancy said, that's the platinum.

The issue that was frequently found was that the in vitro assays were invariably ‑‑ it's where the selection pressure begins to show itself, because the ability to make myriads of compounds through combinatorial chemistry or what have you allows the selective pressure to be designed around.

What happens then is you bang into something else. And the animal then tells you that. I mean, you think you've got it, and then when you put it into an animal, you kill half of them, or whatever you do.

I think the unique aspect ‑‑ and it came up in some of the talks ‑‑ would be to set up some kind of a matrix, which actually doesn't answer the same question but continues to answer around the edges of potential known mechanisms. Because therein lies the probability of getting to one in five, of improving those odds, which will ultimately drive down the costs, because most of the cost in pharma is incurred ‑‑ is suffered ‑‑ from toxicity.

Of the one in 50 attritions, 95 percent of them are toxicity. And they will never get a chance to be dosed adequately to see efficacy, or they show some unexpected metabolite or side effect that was not, quote, "appropriate." But where am I going with this?

DR. PLANT: But pharma notwithstanding.

DR. BERTRAM: Sorry.

DR. PLANT: But pharma notwithstanding, of course, we're here to talk about tissue engineering. Do you want to ‑‑

DR. BERTRAM: Right. Sorry, I got way off. You pulled me into my past and many memories coming out.

I think one of the things that's quite powerful ‑‑ and I was struck with the talks today ‑‑ what I like about the in vitro methods that were presented today is the new and novel insights that the animal can't tell me.

You were able to give mechanistic understanding ‑‑ and I'm going to point to Andres' talk and to Michael Sacks' talk, which really jumped out at me ‑‑ and getting those insights in a simpler system that I can then test in the animal model is incredibly powerful. And I think that's not too different from pharma, but that's where the tissue engineering element is of great benefit, and I would really encourage continued focus on that.

That's all I really wanted to say. Nancy, you brought back memories.

MR. RATCLIFF: Tony Ratcliff of Synthasome.

The systems biology presentations were extremely exciting ‑‑ an immense amount of data being processed, being used effectively, productively, leading to more knowledge, better mechanistic understanding and better products. That's great.

Bringing it to tissue engineering, that's where I felt we had a little bit of ‑‑ I had, anyway ‑‑ a conceptual block. And I felt that if we take this type of technology, which seems to have been developed principally for pharma and that kind of pathway, then I think we're going to end up with some problems when we apply it directly to tissue engineering, or if we try to make it applicable to tissue engineering.

What I felt it did was show the potential of that type of approach. Systems biology has the potential to have massive impact. But I think what we have to do from a tissue engineering point of view is to step back, strip it down and develop it specifically for tissue engineering.

If we do that, then I think we'll be productive. If we try to force fit it into tissue engineering from where it is now, I suspect ‑‑ I don't know, but I suspect ‑‑ we'll end up with some problems. And since I live in tissue engineering, that isn't something I want to happen.

So what I suspect we could recommend from that point of view is that this type of systems biology, as it applies to tissue engineering, gets pushed back into the research mode, where we ask how we can use systems biology, whatever it may be, to educate us in terms of tissue engineering from a research point of view, and also understand how we might use it from a product development and product characterization point of view. But I think we have to strip it down to its basics and develop it specifically for tissue engineering, not force fit it so that it's common.

DR. TUAN: I just have to comment on what Tony just said. I agree. I think we need to strip it down to some essentials ‑‑ or, I think somebody used the term transition proteins or whatever, markers.

I think one of the key things about tissue engineering or to engineer a tissue is to get a product that has the intrinsic biological responsiveness of the tissue that we want to make.

That being the case, I think we should add to all our analytical procedures another additional variable, which is: hit it with something that we know that tissue is going to respond to normally ‑‑ be it a hormone, be it mechanical loading or a growth factor, what have you ‑‑ and look at those variables. Because if you look at those differential changes, I think you can get a lot more out of it than just straight characterization and collecting all the data. I would just throw that into the equation.

MS. SEAVER: Hi, Sally Seaver. I would say that compared to what I used to hear five or seven years ago, we have made some great strides in getting the two halves to work together, in the sense that we are getting engineers now who really are beginning to appreciate the cell biology, and cell biologists who realize that engineering is very, very important in all of this.

But I think my take‑home message from this conference is that we really need a heck of a lot more basic science. And we really need to think very hard about which particular aspects of a particular therapy are important, because they're going to change for different cells and different therapies for a particular indication ‑‑ even the same cells for a different indication are going to change.

I would disagree with this tox screening model. What they were talking about in tox screening is for small, small molecules. I can tell you from working in biotech land that most biotech products ‑‑ i.e., recombinant proteins, et cetera, gene therapy, cell therapy ‑‑ fail because of a lack of efficacy, not because of toxicity. People force things into phase 3 trials that have no business being there. Now, in this field, I'm not so sure that some of the failures weren't from not understanding how to scale up and manufacture, because you looked at some initial results that were very, very positive.

The other thing that I don't think people took into account, and still aren't taking into account, is: if you have a disease such as diabetes and you simply replace the islet cells with more islet cells, what stops them from getting destroyed? That's the additional complication here.

Finally, a word of hope. Yes, we heard a lot of expensive techniques this morning, but I can tell you that mass spec is a regular release assay that's used all the time for biotherapeutics. And if it starts to be used here, there are contract research organizations who own mass specs. So you don't have to own mass specs; you send the sample out.

So if any of these really neat techniques we were discussing this morning or yesterday really become useful to this group, there will be contract research organizations who are willing to buy machines to run your samples, and it will be possible to validate them in an ICH manner.

MR. ROWE: So, David Rowe again. I want to try to make the case again for how one could use reporters of various kinds that are designed to reflect a cell‑sensed event ‑‑ be it apoptosis, be it cell division, be it differentiation or activation of a kinase, or whatever it is we're trying to measure ‑‑ so that if we're able to do that and hook it to a signal that is visually measurable, or measurable in a magnetic field, or whatever we want to do, we would have a way to measure that event, be it in primary cells, be it in an engineered construct, be it in an intact animal, be it in any setting at all.

So, to me, for this to work ‑‑ hook the events up to reporters that are easy to measure, that anybody could measure, and have a warehouse of the events that we wanted to score for, so we could send away for them and they came as primary cells, as a mouse, as a virus, as whatever we wanted. To me, it's so important to catalog, warehouse and distribute the reporter mechanisms that you would want to use in whatever system you have. And it could be applied in many, many different settings.

To me, this is an area that I would love to see more discussion on. To me, that is where this is going to have to go. Because they can be built for anything ‑‑ you can measure any darn thing you want. You can build a reporter and a sensing system to do that.

DR. PLANT: You're talking about genetically engineered reporter cell ‑‑

MR. ROWE: Well, yeah. I mean some gene that will change its expression because the environment that it is in has changed, and that environment is associated with the event that you want to measure.

DR. PLANT: I'm curious. Is that a concept that resonates with many others in the audience?

MS. SEAVER: I hate to be the institutional memory. But I believe when Targeted Genetics did this with a reporter gene and gene therapy ‑‑ be careful. The genes A and B were cleared out very quickly by innate immune reactions.

MR. ROWE: I'm not talking about that.

MS. SEAVER: But you said in animals. I'm all for you in various cell lines. But when you go into the animal, watch out for the ‑‑

MR. ROWE: No, that's transgenic mice. That's not an issue. That's not an issue. This is ‑‑ we don't want to deal with that one. I mean I understand that problem.

I mean, we're making transgenic ‑‑ not me. There's already a project going on at NIH to target every gene in the mouse genome. It's already being done, and they're putting reporters into those places. They're already available.

DR. PLANT: Andres, did you have something to add to that?

MS. BARRON: Hi.

DR. PLANT: One moment, please. I just wanted to get a comment from Andres on this last ‑‑

DR. GARCIA: I was just going to say that in theory that sounds great. I think reduction to practice is where you hit a lot of technical roadblocks.

And early on, in my lab, we developed some systems to look at gene expression with luciferase constructs, so that we could amplify the signal. And it wasn't worth the effort. It was just too hard, and we saw some drift from the primary cells to the cells that we selected to express these constructs. And we found differences in transduction efficiency between different cells and different batches. And at the end of the day, it was much easier for us to do standard real‑time PCR than to mess with this. Now, I am sure that there are places where that can be applied. But I think you need to proceed carefully.

DR. PLANT: Let's go on to another topic, if we could.

MS. BARRON: Yes, I would like to move on, because I do ‑‑ my name is Majina Fasha Barron (phonetic). I think that all the techniques that we were learning about today are very useful for developing tissue engineering products and constructs.

However, I don't see much point in using them at the point of actually releasing products into the clinic for treating patients. The reason is that all these techniques take a very long time, and as you probably appreciate, living tissue engineering constructs are very short‑lived ‑‑ they have a very short shelf life ‑‑ and we basically cannot afford to do extensive, time‑consuming experimental quality assurance on these products, because they will expire and never be available.

DR. MARTIN: I just wanted to add to what you're saying. I would echo what the gentleman who spoke about the neobladder said yesterday, which is that you want a real‑time assay. It sounds like that's really what you want for every aspect of your QC.

And, again, if the if, if, if, if ‑‑ if we had a real‑time assay for our media and our cells and the host, everything would definitely be better.

I would agree with what was said over here, that, yes, there is basic science that needs to be done to help develop these tools for you. And personally, from the outside, I don't really know how you mesh the needs of quarterly reports and milestones that one might be subject to in developing the product with "let's study this ad nauseam to try to figure out what is the best assay or best reporter."

MR. DALEY: Mike Daley, Tigenics. I feel like the carpenter that just got unlimited access to all of Sears' tool department. I want to build a house or I want to build a building. I'm not exactly sure what that building looks like, but I have a pretty good idea. But I need an architect to give me a design that I can execute, that's going to be meaningful at the end of the day, so I have a house that will stand up, has structural integrity, and is going to be useful.

Anybody that's gone through construction knows, obviously, that it's an iterative process. It never comes out the first time the way you want it. And unfortunately, I think that that's the observation. I feel like I'm the carpenter, and I said, oh, this is exciting ‑‑ just like when I used to accumulate those Craftsman tools, most of which are still sitting in my garage ‑‑ and I just need that transition.

So maybe the message here is that we need a follow‑up to this: a translational type of workshop that takes the tools that we've learned about in these last two days ‑‑ which I agree are exciting and thought provoking and great, but which don't yet allow us to take the next step ‑‑ and asks how we make them useful to a patient. So I would encourage the FDA to do that.

My second observation is that we're trying to come up with guidelines, and maybe my expectations were out to lunch. Let me summarize it this way.

We're talking about products, we learned today, that are sometimes single cells, sometimes multiple cells, sometimes multiple cells put on matrices, sometimes growing into organ systems that I still don't understand how we're ever going to get back into the body; sometimes the matrices are synthetic, sometimes they're biologically based; sometimes we have a lot of experience with how these matrices work in the body with the cells, sometimes we have no information.

We're trying to come up with tools to assess them that are common in nature, that we can all feel comfortable with going forward. My hat's off to the FDA for trying to sort this out, because the problem is infinitely more complex than even the questions we're trying to address here with these particular tools.

So I think it's also important ‑‑ and I just want to reiterate that it's very important ‑‑ to keep an open dialogue, because just as tissue is always maturing and morphing into different things, this field is always changing. It's the bane of the regulatory process.

I'd like to be able to say that today is the same as it will be six months from now, so that I can do a project, but I know it's going to change. So it's important to keep this dialogue going.

So I would just encourage the agencies ‑‑ this is a great interactive group ‑‑ to keep the momentum up and bring it to the next level, so I can build my house and then sue the contractor. But I mean ‑‑ thank you.

DR. PLANT: Okay. I guess we'd like to thank everybody for their participation today. And obviously, in this continuously evolving process, we will have to revisit a lot of the things that we talked about today, and hopefully there will be progress.

Celia Witten now will say a few words and introduce our wrap‑up speaker.

DR. WITTEN: Yes, thank you, Anne.

So it's my pleasure to introduce Dr. Nerem, who's going to serve as rapporteur for this workshop. As you all heard yesterday from Dr. Durfor when he gave the introductory remarks, Dr. Nerem was asked to focus on three things: to highlight some of the scientific issues that have been brought up during the last day and a half, to give his assessment of the scope of the output of this workshop, and to consider the scope of a possible future workshop.

Dr. Nerem is the director of the Georgia Tech/Emory Center for the Engineering of Living Tissues and has served in the Parker Petit Institute for Bioengineering and Bioscience at Georgia Tech since 1995.

He's been active in bioengineering for more than 25 years. In 1981, he established a cell culture laboratory and began to examine the influence of physical forces on anchorage‑dependent mammalian cells with significant focus on the cells which make up a blood vessel.

This work led to his interest in tissue engineering. He was elected to the National Academy of Engineering in 1988 and to the Institute of Medicine in 1992. He was the founding president of the American Institute of Medical and Biological Engineering.

He served on the science board of the Food and Drug Administration from 2000 to 2003 and serves part‑time as the senior advisor for bioengineering at the NIH's newest institute, the National Institute of Biomedical Imaging and Bioengineering.

Dr. Nerem.

DR. NEREM: Well, thanks, Celia. I appreciate this opportunity. I came here to learn myself. I actually thought I was invited to be the raconteur, not the rapporteur. But I do have some comments which might stimulate other comments from the audience.

Of course, this workshop's been on cells and scaffolds, and specifically on in vitro analyses of cell/scaffold products. Celia's already indicated what I was asked to do. I think we've really had a number of excellent presentations. I'm really impressed by the quality of these last two days. And I'm hoping that we'll have these talks up on a website. Do you think that's possible or not, Celia?

DR. WITTEN: Yes, I was going to say we're going to see about putting up the slides and whether we can make a recording of this available. We have everyone's e‑mail addresses, so we'll send out a note about what we're going to do to make those available.

DR. NEREM: I was going to start with some general comments. This is billed as a tissue engineering workshop, and you can argue about what the definition of tissue engineering is. For many of us who originally used it as a very broad term, it's become more a term for replacement, and regenerative medicine has become the term that includes replacement but also repair and regeneration.

Even in repair and regeneration, there's a role for cell/scaffold products. And so one of the things one has to think about is whether the purpose of a cell/scaffold combination is to create a replacement tissue, or whether the scaffold is serving as a delivery vehicle for cells in a cell therapy situation.

Related to that, of course, is whether the therapy is one of cell replacement or whether we are simply trying to introduce what I would call a biological factory. And then there's the whole issue of whether we are talking about a single cell type or multiple cell types. Clearly, tissues are made up of multiple cell types. But even in a cell therapy situation, you may want multiple cell types because of the cooperative nature of the effect you hope will take place.

Of course, when we talk about cells, in addition to the issue of single cells versus multiple cell types, there is the question of the desired differentiation state or phenotype. It's not clear that we always want to introduce fully differentiated cells.

I was glad to have John Elliott's talk as part of this program because I think mode of culture can be very important. One of my colleagues, Barbara Boyan, whom at least some people in the audience know, is very interested in sex differences ‑‑ in her case, in the context of male chondrocytes being very different from female chondrocytes.

There's an example out of the cardiovascular arena, where skeletal myoblasts were being used for myocardial repair, in which the culture conditions for the male skeletal myoblasts were very different from the culture conditions they had to use for the female skeletal myoblasts. So I think we should not forget that, in spite of affirmative action and equal opportunity, there are differences between males and females.

Somebody early on ‑‑ I can't remember who it was; it might even have been Buddy Ratner ‑‑ talked about the reductionist attitude of many in the biology community. And certainly, much has been gained from the reductionist approach.

Having said that, I think we all recognize that cell function is really orchestrated by a symphony of signals. And when you're putting together a cell/scaffold combination, you very much recognize that you are, in fact, dealing with that symphony of signals, and somehow you have to sort it out.

I learned a lot about in vitro analysis of cells and, through Buddy Ratner's excellent presentation, about how to characterize biomaterials and scaffolds. But the question that still remains, and which I think has to be front and center, is how we can do an in vitro analysis of cells in a cell/scaffold combination.

I think some of the things presented today represent a potential for what we might be able to do in the future. But as Tony Ratcliff indicated in his comments, it's not going to be a simple matter of taking those tools and applying them in tissue engineering.

One of the speakers ‑‑ it might have been Ken, or maybe it was Dan, I can't remember, but it was one of our speakers ‑‑ said that you are going to have to be creative in the way you figure out how to apply what we're doing.

But I think we all recognize that cells in 3D have very different characteristics from cells in 2D. And that has to be recognized as you move from the components of your product to an actual cell/scaffold product, which will invariably be a three‑dimensional product.

I think it was Dr. Benton who talked about the issues of remodeling. Once a cell/scaffold product is implanted, there will be remodeling. This being the case, and considering that the remodeling will be dependent on the host and thus variable, what is the appropriate in vitro endpoint? Or is the appropriate endpoint not an in vitro endpoint but an in vivo endpoint?

Again, considering the remodeling, if it's an in vivo endpoint, when do you actually take that endpoint? Some said yesterday that you really need in vitro assays and in vivo assays to work together, to basically work hand‑in‑hand. And I think that's going to be critical for a lot of reasons, but certainly because of this remodeling issue.

Not too much was said about immunology, but if one is not using autologous cells ‑‑ and I think in many cases the route to success will be with non‑autologous cells ‑‑ then one has to worry about immunology.

Again, I enjoyed Buddy Ratner's overview. And I was glad that my colleague Andres Garcia talked this morning about cell adhesion because that's really the interface between the cells and the scaffold and something that we need to learn more about.

I want to come back to this whole issue of cells in 3D constructs, because we really have to figure out how to do that, and that's going to take a lot of basic research. Clearly, we need to evaluate the scaffolds. We need to evaluate the cells prior to putting them in the scaffolds. But we also need to evaluate the cell/scaffold combination and the interaction between the cells and the scaffolds.

And I think there's a lot of basic research that needs to be done, which really leads me to make a few comments about what I'll call the interdisciplinary team.

Those of us in the tissue engineering community sense ‑‑ and at least many of my colleagues agree with me ‑‑ that there have not been enough basic biologists working on the issues of tissue engineering. In addition to engineers and clinicians and chemists and so forth, we need basic biologists, including developmental biologists.

Rocky Tuan said we can borrow from developmental biology. Certainly, there is much that we can learn from developmental biology. And there actually was a Keystone symposium back in April that brought the tissue engineering community together with the developmental biology community, really for the first time in terms of an organized meeting. And, of course, I think we've seen today that there's much that physical scientists and mathematicians can also contribute.

Someone once said it takes a whole village to grow a child. Well, in this case the child is tissue engineering. The child is cell/scaffold products. And it's going to take a whole village of people with different expertise to grow the field.

So what are my suggestions in terms of possible topics for our next workshop? I can see a whole series of workshops. As much as I've learned about biomarkers in the past 36 hours, I think there's much more that could be done to go further into that field.

I think a workshop focused on in vitro analyses of 3D tissue‑engineered constructs might be very important. Another idea that came to mind in talking to people at this conference is a workshop on the success, or lack thereof, of in vitro analysis in predicting in vivo performance. Now, some would say, well, why even talk about preclinical performance; let's talk about human performance. I would first start with whether in vitro analysis can predict performance in an animal model and then go beyond that to actual performance in the human.

There is also a workshop that could be done on manufacturing processes, for example, on the use of biomarkers as a quality control measure in the manufacturing process.

And I think it was Nancy Parenteau who talked about how we ought to try to learn what we can from the past. Yes, the 1990s were exciting. There was a lot going on. There was a lot of hope. There was also, I would say, a lot of hype, Nancy, but these were learning experiences.

I can't remember the exact quote, but someone once said something about how either you learn from history or you're bound to repeat it. So there are some things that can be learned from the pioneering efforts of companies like Organogenesis, Advanced Tissue Sciences, Genzyme with its Carticel, and so forth.

So finally, I was asked what FDA should do as a follow‑up to this workshop. Certainly, there could be follow‑on workshops. I would hope that some kind of position paper might come out of this workshop, to be written by FDA, or by FDA and NIST together.

I'm also thinking ‑‑ and I'd be interested in FDA reaction to this, but I think there needs to be the development of a guidance document. Now, maybe we're not quite there yet to do a guidance document. But if one begins to think about the outline of a guidance document in this area of cell/scaffold products, it might help structure one's thinking about what the next workshop should be because I think the goal ought to be some kind of a guidance document.

Finally, as my friends at FDA ‑‑ and I think I still have friends at FDA ‑‑ know, I keep on bringing up the issue of the regulatory pathway for combination products in general and tissue‑engineered products in particular.

I think the FDA has an amazing group of people who really are doing what they can to try to address the issues that are being presented by these new technologies. It's not at all clear to me that the regulatory pathways that exist are the right ones for ultimately bringing cell/scaffold products to commercialization.

In spite of the fact that I recognize all the constraints imposed on FDA by congressional legislation and so forth, I would like to see more thinking out of the box as to what the right regulatory pathway is when you think about the complexities of the kinds of products that we've talked about in the last two days.

Thanks, and certainly I'd be glad to answer questions, but more importantly, to hear suggestions from others here in the audience. Thank you.

DR. PARENTEAU: I'd like to make just one comment as a cell and developmental biologist who's been in tissue engineering for about 20 years, I guess. I get a sense that people are sometimes going to be overwhelmed with the information they can gain through the new methods, just like they thought, oh, proteomics was going to give it to us, or genomics, or what have you.

Really, I'm afraid that people are going to lose their way. And to the earlier comment about building a house: the houses are already built. They're built during development ‑‑ I'm a firm believer in that. They attempt to be rebuilt during disease processes, and they're rebuilt in us in a small way every day.

So if you take a reductionist view of biology and think of it as a biological process, you can probably count on two hands, if not just one, the basic biological processes that have a certain activity. And all of nature is based on those things.

So if you can understand some of those things ‑‑ and that's how I would do it today if I were doing it again, and I've worked on everything from scaffolds to a skin construct to an islet cell ‑‑ it can really help you; it can help ground you.

That's the only thing I want to say, because I'm fearful that people will get overwhelmed with all the different things. The cell behavior in a scaffold is going to relate somehow to what they're trying to do, and all you do then is try to understand it.

Yes, there's adaptation in vitro. But it's adaptation. And you just have to be able to ‑‑ well, "just" is maybe the wrong word ‑‑ but you have to understand whether it's something like a fibrotic response or a regenerative response. And that can help steer you in the right direction.

DR. WITTEN: I'd like to thank you, Dr. Nerem. I'd also like to thank the other speakers and the members of the audience. But in particular, I'd like to thank the organizing committee and Bernadette Kawaley, and all of the other members of OCTMA who helped with the logistics for this meeting. I hope you all have a safe trip back to your homes. Thank you.

(Whereupon, the meeting was concluded.)
