                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
            MEETING:  PLANT OPERATIONS AND RELIABILITY AND
                     PROBABILISTIC RISK ASSESSMENT
     
                                  ***
                        U.S. Nuclear Regulatory Commission
                        11545 Rockville Pike
                         Room T-2B3
                        Rockville, Maryland
                        Tuesday, January 26, 1999
         The subcommittee met, pursuant to notice, at 8:30 a.m.
     MEMBERS PRESENT:
         GEORGE APOSTOLAKIS, Chairman, ACRS
         JOHN BARTON, Member, ACRS
         MARIO FONTANA, Member, ACRS
         THOMAS KRESS, Member, ACRS
         DON MILLER, Member, ACRS
         DANA POWERS, Member, ACRS
         ROBERT SEALE, Member, ACRS
         WILLIAM SHACK, Member, ACRS
          GRAHAM WALLIS, Member, ACRS
                            P R O C E E D I N G S
                                                      [8:30 a.m.]
         DR. BARTON:  The meeting will now come to order.  This is a
     joint meeting of the ACRS Subcommittee on Plant Operations and
     Reliability and Probabilistic Risk Assessment.  I am John Barton,
     chairman of the Subcommittee on Plant Operations.  Dr. George
     Apostolakis is chairman of the Subcommittee on Reliability and PRA.
         ACRS Members in attendance are Mario Fontana, Thomas Kress,
      Don Miller, Dana Powers, Robert Seale, William Shack, Robert Uhrig, and
     Graham Wallis.
         Also in attendance is Mario Bonaca, who has been selected as
     a new Member.  He is expected to be appointed to the ACRS in the near
     future.
         The purpose of this meeting is to continue the subcommittee
     reviews of proposed improvements to the NRC inspection and assessment
     programs, including initiatives related to development of a risk-based
     inspection program and performance indicators.  Subcommittees will
     gather information, analyze relevant issues and facts, and formulate
     proposed positions and actions as appropriate for deliberations by the
     full Committee.
         Michael T. Markley is the cognizant ACRS staff engineer for
     this meeting.
         The rules for participation in today's meeting have been
     announced as part of the notice of this meeting previously published in
     the Federal Register on December 21, 1998.  A transcript of the meeting
     is being kept and will be made available as stated in the Federal
     Register notice.
         It is requested that speakers first identify themselves and
     speak with sufficient clarity and volume so that they can be readily
     heard.
         We have received no written comments or requests for time to
     make oral statements from members of the public.
         I note that we've got Bruce Mallett on video from Atlanta. 
     Bruce, can you hear us?
         MR. MALLETT:  I can hear you fine.  Can you hear me?
         DR. BARTON:  Yes, we can.  And I also understand that you
     have another commitment, and we lose you at what time?
         MR. MALLETT:  I have to be out of here by ten o'clock.
         DR. BARTON:  All right.  So at this point we'll proceed with
     the meeting, call upon Frank Gillespie, Mark Cunningham -- Mark's not
     here -- who's got the lead here this morning?
         MR. JOHNSON:  Frank.
         MR. GILLESPIE:  I do.
          DR. BARTON:  All right.  Do you intend, since we're going to
      lose Bruce, to kind of get his inspection stuff tied in early?
         MR. GILLESPIE:  Yes.
          DR. BARTON:  Since we did meet in December, and sent a
      letter up, the things that we were interested in that you were still
      working on at that point, I believe, were the work on the assessment
      process.  So if we can spend some time on that today, and on the
      transition plan.  And hopefully you can answer some questions on the
      package that you're asking the Commission to proceed with -- it has
      got some, you know, future work needed, some holes in it -- and how
      you plan to address those issues.  We'd be interested in that also.
         So at this point, Frank, you've got the show.
          MR. GILLESPIE:  In order to get Bruce on pretty quickly,
      we're not going to repeat everything we said the last time; we've got
      that fat package out, and it pretty much lays out all the details.  So
      rather than go through all the viewgraphs we have here, which are kind
      of a subset of the Commission viewgraphs we used -- there is one on the
      concepts -- we're going to, I think, just get right into it.
          Our one big hole we identified with the Commission -- just to
      highlight it, since the Commission said that they're interested in it --
      is a risk-informed screening measure for dealing with inspection results
      that aren't really conducive to quantification.  And we're still working
      that, and Alan Madison's going to be here a little bit later to go over
      the details of that.
         DR. BARTON:  All right.
         MR. GILLESPIE:  And that was probably the one biggest policy
     hole that we had I think when we addressed this with the Commission that
     they were looking forward to seeing in March.  And probably one of the
     more -- I don't want to say intellectually, academically challenging
     things to do is to come up with something that the inspector in the
     field, who's basically only had a two-week risk course, can apply and
     will get a meaningful screening process out of.  So Alan will come. 
     We've made some progress on that.  And he'll be explaining that a little
     later.
         So with that, rather than go through all of these, the
     schedule is there, our transition plan is there, we'll hit that at the
     end.  We're on kind of a tight schedule in that we're going to try to
     have eight pilot plants started in June.
         DR. BARTON:  These are six-month pilots?
         MR. GILLESPIE:  Yes, we're going for six-month pilots.  We
     feel we need at least a half a year to just exercise the system and work
     the bugs out of it, that just doing a three-month pilot isn't going to
     give us a chance really to observe it.
         DR. POWERS:  I guess the question that came to my mind was
     the opposite.  Why isn't it a full cycle, a full-year pilot?
         MR. GILLESPIE:  There's a certain amount of impatience on
     people's parts --
         DR. POWERS:  I understand that.
         MR. GILLESPIE:  To get on with it.
         DR. POWERS:  But it seems to me you don't get through the
     whole shooting match in one consistent pilot by going with six months. 
     You're guessing a lot.
         MR. GILLESPIE:  Probably the -- it's not guessing.  Six
     months actually gets us through the basic inspection cycles, gets us
     through a planning cycle, and because the reporting is going to be on a
     quarterly basis with a one-year running average, once you've reported
     one or two quarters' worth of material, you've basically exercised it
     twice.  The first time you've made a lot of mistakes, the second time
     hopefully you've made fewer, and now you can kind of clean it up.  So we
     would have exercised reporting through two cycles.  We would have
     exercised inspection planning.
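          [Illustrative sketch:  the reporting scheme described here --
      quarterly reporting with a one-year running average -- amounts to a
      four-quarter rolling window.  The Python below is a hedged
      illustration; the class and variable names are assumptions, not
      taken from the program documents.]

          from collections import deque

          class RunningIndicator:
              """Track a performance indicator reported quarterly as a
              four-quarter (one-year) running average."""

              def __init__(self):
                  self.quarters = deque(maxlen=4)  # keep last year only

              def report_quarter(self, value):
                  """Record one quarter's raw value; return the running
                  average over the quarters reported so far."""
                  self.quarters.append(value)
                  return sum(self.quarters) / len(self.quarters)

          # Two reporting cycles, as exercised in a six-month pilot.
          scrams = RunningIndicator()
          print(scrams.report_quarter(1.0))  # after the first quarter
          print(scrams.report_quarter(0.0))  # average over two quarters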
         What we are likely going to be missing will be those things
     that might only be done once a year or once every two years, like the
      engineering inspections.  But the engineering inspection, quite
      honestly, because there are no PIs in engineering, is fundamentally what
      we're doing today, because we're picking high-risk systems.  So what
     we're not testing in the six-month period is actually the things that
     might be very similar to what we're already doing.
         With that, I'm going to turn it over to Bruce right away,
     because he's got an interesting section and one that's got a lot of
     implementation questions still left on how we're going to actually
     organize it to get it done.  So Bruce --
         MR. MALLETT:  Thank you, Frank.  Mike, were you able to
     receive the two slides that I sent?
         MR. BARANOWSKY:  No, I did not yet.  I'll get someone to run
     for them.
         MR. MALLETT:  Mike Johnson.
         MR. JOHNSON:  Yes, Bruce.
         DR. BARTON:  George, do you have a question at this point?
         DR. APOSTOLAKIS:  Yes.
         DR. BARTON:  Bruce, can you hold a second?  We've got a
     question here at the table.
         MR. MALLETT:  All right.
         DR. APOSTOLAKIS:  Well, I want to ask, as I was reading this
     report, the question of the objectives of the whole exercise came to
     mind, and I'm wondering when it would be a good time to raise that
     question.  Shall we let Bruce go on first, or --
         MR. GILLESPIE:  No, I think that's probably a good time,
      because there's a good lead-in:  one of the criticisms we got of the
      inspection section was that it did not -- and, Bruce, chime in if I get
      this wrong -- clearly state the objectives we were trying to
      meet when you read the inspection section by itself.  So, no, this is a
     good time to ask --
         DR. APOSTOLAKIS:  Okay.  Let me tell you then what the
     concern is.  On page 2 of the document it says this paper presents
     recommendations for improving the NRC's reactor oversight processes
     including inspection, assessment, and enforcement, and includes a
     transition plan.
         Now it doesn't really say what the objective of the
     oversight process is, but then on page 3 it says, the first full
     paragraph, the objective of the inspection task group was to develop
     recommendations for a baseline inspection program blah blah blah blah
     blah to determine whether plant performance is at an acceptable level.
         Now I am not sure that this is what the objective of an
     oversight process should be.  I see this as equivalent to what people
     call quality control in manufacturing.  You have a process which is
     acceptable.  I mean, the plants have already been licensed, right. 
     They're operating.
         MR. GILLESPIE:  Right.
         DR. APOSTOLAKIS:  It seems to me that the purpose of the
      oversight exercise is to make sure that what you have licensed, what you
     think you have licensed is indeed the truth.
         So, now, then we go to Attachment H.  If that premise is
     correct, then if I look at the figures H1 through 6, which are pages
     H-18 on, these are the actual data that I believe NEI supplied you or
     some of it is -- yes, okay.  So, for example, figure H1 on page H-18 is
     the number of unplanned scrams per year for a number of units.  It's not
     all the units, right?  For a number of units.  And the same thing, the
     next one is the BWR high-pressure injection system unavailability and so
     on.  This is real life.
         In the text it says that the way to define one of the goals,
     I think it's the -- I don't remember the colors now, green to --
         MR. GILLESPIE:  Green to white.
          DR. APOSTOLAKIS:  Green to white.  It is to consider the
      number of unplanned scrams that about 95 percent of the plants meet --
      that is, they have that many scrams or fewer.  And then we make that
      part of the assessment process and say this is a threshold.
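          [Illustrative sketch:  setting the green-to-white threshold at
      the level that about 95 percent of the plants meet or better amounts
      to reading off a 95th percentile of the industry data.  The Python
      below uses made-up unavailability values; the function name and data
      are assumptions for illustration, not the H-series figures.]

          def percentile_threshold(values, fraction=0.95):
              """Return the value at or below which roughly `fraction`
              of the plants fall."""
              ordered = sorted(values)
              index = min(int(fraction * len(ordered)), len(ordered) - 1)
              return ordered[index]

          # Hypothetical fleet unavailability data.
          rhr_unavailability = [0.004, 0.006, 0.008, 0.010, 0.012,
                                0.014, 0.016, 0.018, 0.020, 0.025]
          print(percentile_threshold(rhr_unavailability))  # 0.025 here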
         It seems to me that this runs counter to the notion that the
     purpose of the oversight process is to make sure that what the plants
     have been doing, what we think they have been doing, is indeed correct. 
     In other words, you are forcing now this way the five percent of the
     plants that are above to actually come below the threshold.  In other
     words, you are regulating them instead of just overseeing them.  Isn't
     there a difference?
         You are telling the -- in fact, if you go to one of the --
     yes, for example, figure H4, BWR RHR system unavailability, the
     threshold is set at .015, and there are 1, 2, 3, 4, 5, 6, 7, 8 plants
     that in fact have an RHR unavailability greater than .015.  So aren't
     you sending them a message that they better make sure that their RHR
     unavailability comes below .015, which may be a good message, but it's
     not the job of the assessment process to do that.
         In other words, the way I would do it, I would say okay, Mr.
      Plant 70, you have an RHR system unavailability of .025, it's the
      highest,
     this is what you have in your IPE, and your whole IPE was accepted by
     the staff or it was reviewed by the staff, and you have a license.  So
     all I have to do now from now on is make sure that you don't become
     worse than .025, not that you reduce it down to .015.  That's a separate
     regulation.  The oversight process' job is to make sure that what we
     have licensed and the assumptions and all that is still the way we
     thought it was.  It's quality control, in other words, not regulation.
          Which brings up now another point.  If that is the case --
      that's the second point now; the first point is we are in fact
      intervening with this process, it's not oversight anymore.  The second
      part I think is
     also a conceptual difficulty.  Let's say you agree with me.  I don't
     expect you to agree with me right away, but let's say you agree with me.
         DR. SEALE:  You're going to give him a little time.
         DR. APOSTOLAKIS:  So you say okay, fine, so even for this
     plant where the RHR unavailability is kind of high, they probably have
     other things that are much better on the average, so the total core
     damage frequency came out to be some reasonable number.  So now we make
     sure that the oversight process will monitor the plan to make sure that
     the IPE -- the IPE's assumptions and the numbers and so on remain the
     way they were submitted.
         But the plant has not been licensed on the basis of the IPE. 
     Right?  It's the whole --
         MR. GILLESPIE:  Right.
         DR. APOSTOLAKIS:  Deterministic.  So can we really do that? 
     Can we regulate, can we have an oversight process that is based on a
     study that is not part of the licensing basis?  Is that allowed?
         Which then creates the question, should we again go back to
     Part 50 and make sure we change things there before we develop this?
         MR. GILLESPIE:  Two things, George.
         DR. APOSTOLAKIS:  Yes.
         MR. GILLESPIE:  Yes, we're out of sync, because inspection
     is ahead of Part 50 -- risk-informing Part 50.
         DR. APOSTOLAKIS:  That's correct; yes.
         MR. GILLESPIE:  So that's true.  The other piece is we don't
     have the sophistication to do this plant by plant by plant right away.
         The third piece is what's probably missing in clarity, and
     we -- I tried to make this clear at the Commission briefing -- is that
     this piece and the way these are used and the way they're used in
     inspection isn't intended to be a backwards ratchet.
         The baseline program is an indication program.  Even the
     inspection program is an indication program.  It is not diagnostic.  It
     is not intended to force someone to be better than they can be.
         DR. APOSTOLAKIS:  Aren't we de facto doing this though?
         MR. GILLESPIE:  Could that happen?
         DR. APOSTOLAKIS:  Yes.
          MR. GILLESPIE:  Yes, and if you set up a standard and say,
      you know, if you pass this standard, it means we go into more of a
      diagnostic mode to understand why, and that is the real point here.
      The objective of the thresholds is, if someone breaks a threshold, to
      engage them to ensure that we as regulators understand why -- you might
      say, in this case, why they are different from the expected norm.
         DR. APOSTOLAKIS:  So it becomes plant-specific at that
     level.
          MR. GILLESPIE:  And at that point you become plant-specific,
      and engaged doesn't mean send out a 20-man team.  It means first you'd
      probably end up asking a question -- why?  If there is an explanation,
      we understand it, and that is when it becomes plant-specific.
         DR. APOSTOLAKIS:  I understand that but again there is a
     fundamental concern here that -- and there is a third point -- but first
     of all, I don't think it's such a monumental job to do it on a
     plant-specific basis, because we don't have to do it.  We can ask the
     utilities to do it.  We can give them guidelines how to do it but we
     don't have to do it on a plant by plant basis.
         For example, if I go to Figure H-2, look at Plant Number 62
     on page H-19.  It is a BWR.  High pressure injection system
     unavailability is the lowest of all the plants that are listed here. 
     It's less than .01 and you have already set the threshold at .04, right? 
     It's really low.  Now that plant probably wants to get some credit for
     it.  Maybe somewhere else -- they are not named, right? -- somewhere
     else they have an unavailability for something else that is higher than
     the threshold, so they may come back or they should come back and say,
      look, on balance our risk is acceptable, and so on.  But if we
      look at each one of these individually -- which is the third point I
      wanted to make and I forgot -- we are regulating.  I mean, the idea of
      the cornerstones was not really to continue to be as intrusive as we
      have been.
         We don't want to regulate initiating events and the pressure
     boundary and inspect the same way we were doing it before and I think
     that is the path that this is going down, because now we have thresholds
     for initiating events, thresholds for unavailabilities of various
     systems, and so on, so it will be as intrusive as the regulatory system
      and inspection process were before.
         How can we give them more flexibility, which is the whole
     idea of performance-based regulation?  Remember, these plants have
     already been licensed.  Even if the core damage frequency is greater
      than the goal, there's nothing we can do about it, unless it is way up
      there, which it is not.  Right?  Because that was not part of the
     licensing basis.
          So can we relax this a little bit and, instead of looking at
      the systems and the initiating event frequencies and so on, propose a
     methodology perhaps that the utilities can implement for them to come up
     with individual -- not goals.  Thresholds?  I don't know -- using some
     sort of systematic approach.
         MR. GILLESPIE:  What we have generated here is kind of a
     generic profile of certain operating parameters --
         DR. APOSTOLAKIS:  Yes.
         MR. GILLESPIE:  And could you generate a plant-specific
     profile for each site, given site differences and engineering
     differences?  In the future the answer is probably yes, you could.
         We started from the general and now we are going to the
     specific.
         DR. APOSTOLAKIS:  I understand.
         MR. GILLESPIE:  And right now we are at the generic.  I
     don't disagree as a future goal it would be very nice to have a
     plant-specific profile but we don't have it --
          DR. APOSTOLAKIS:  But aren't we overstepping the bounds,
      though, of the oversight process -- telling those 5 percent of the
      plants that they had better shape up?
         MR. GILLESPIE:  We are not saying that.
         DR. BARTON:  George, I don't think that is what they are
     trying to do here.  I think they are looking for trends.  They are
     looking for plants that all of a sudden are above a norm -- and the
     industry is using one of the performance indicators which looks an awful
     lot like this kind of stuff so the industry is already familiar with
     this.
         I think what they are looking for is a plant that is below
     this .04 and all of a sudden it shoots up to, you know, .07, and you
     say, well, why?
          DR. APOSTOLAKIS:  If I owned a plant that already has a BWR
      HP injection system unavailability of .07 -- Plant 75 -- and the NRC as
     part of its oversight process says the threshold is .04, what do I do?
         MR. GILLESPIE:  Could this system be abused to that point? 
     I guess the answer would have to be yes, but that is clearly not the
     intention.  The intention is it's an indication system.  At that point
     as the regulator we would want to understand why they are different and
     what offsets it.  It is not clear.
         Now maybe if we had a detailed, plant-specific profile that
     was completely justified, it would be clear, but we don't have it and
     right now the industry hasn't really proposed to do a plant-specific
     kind of profile.
          MR. MALLETT:  But Frank, let's be careful here.  Let's make
      sure we understand the process here:  we have a band of operation
     for plants and those plants in that band of operation might be, as
     George said, from .04 to .07 unavailability of that system and therefore
     when you look at them you may need nothing more than a baseline
     inspection program if they are in that band of performance.
         It is only supposed to be an indicator of whether we would
     increase our inspection above a minimum level, so the concept is every
     plant will get a baseline minimum inspection and if the indicators show
     they are outside those bands or thresholds we would increase at least a
     part of that.  They could very well operate within that band and we'll
     leave them alone.
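          [Illustrative sketch:  the band logic described here -- every
      plant gets the baseline inspection, and only an indicator outside its
      band triggers increased inspection in that area.  The thresholds and
      indicator names below are assumptions that echo the numbers in this
      discussion, not the program's actual values.]

          # Upper edge of the licensee response band, per indicator.
          BANDS = {
              "bwr_hpi_unavailability": 0.04,
              "unplanned_scrams_per_year": 3,
          }

          def inspection_response(indicator, value):
              """Map an indicator value to the inspection response."""
              if value <= BANDS[indicator]:
                  return "baseline inspection only (within the band)"
              return "baseline plus increased inspection in this area"

          print(inspection_response("bwr_hpi_unavailability", 0.07))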
         DR. APOSTOLAKIS:  But Bruce, you are bringing up another
     issue now.  You are saying that what counts is the band not the
     threshold, and if I didn't speak to you and listen to you, I wouldn't
     know that.  The document does not say that.  In fact, that may be a way
     out since I fully appreciate, by the way, that you had to do it in a
     generic way.  Perhaps instead of a threshold give a band because what
     you don't want to do is introduce new requirements through a document
     that is supposed to be an oversight and assessment document.
         MR. GILLESPIE:  Yes.  This is fundamentally an internal tool
     for us to know when we have to do and understand more.
         DR. BONACA:  I would like to make one comment.
         MR. GILLESPIE:  Go ahead.
          DR. BONACA:  Simply, it seems to me that, yes, I can
      understand it is an internal tool, but then it introduces again
      subjectivity, and one goal is to go away from subjectivity.
          One thing that concerns me, looking at these thresholds --
      and that is the same point that George is making -- is that they may not
      be equally achievable for all plants at the same level, but again there
      is compensation from other systems.  What I mean is, designs of plants
      vary very much for the BWR high pressure injection system.
          Some plants have just one train, but they have other things
      that compensate for that, and simply the threshold is not equally
      achievable for all plants.  There are reasons why, however, core damage
      frequency is equally low -- because there are other systems -- and so
      that is what I am trying to understand.
         Is that the point you were making, George?
         DR. APOSTOLAKIS:  Yes.  One of the points.
         MR. GILLESPIE:  We are not disagreeing with that.
         DR. BONACA:  But my only concern is the issue of
     subjectivity, because the question is how is it going to be interpreted
     and used in the field.
         MR. GILLESPIE:  Well, the intention is, just as Bruce said,
     that at that point we have to develop a better understanding, so you go
     from an indication mode into an understanding or diagnostic mode.
         Once we understand it, then it is a done deal.
          Also, this is a starting point, not an ending point.  And
      here is something I am kind of cautious about, and one of the reasons
      we went generic now, with work toward specific in the future:  on the
      other side are all those people who talk about the fidelity of
      individual PRAs and overdependence on them, and how can you have two
      plants that appear to be virtually identical with two different PRA
      results and different sets of important systems, and how do you deal
      with that uncertainty.
         At this starting point we were not prepared to have
     individual plant by plant specific profiles, so this is our starting
     point.  To do that in the future is a good option, but I can't deal with
     it between now and June very well.
         Is this a de facto different set of performance standards? 
     It could be abused to that extent.  I can't say it couldn't.  It is
     certainly not our intent and that is why we have got it out for comment.
         DR. APOSTOLAKIS:  Well, I appreciate the difficulties you
     have.  I mean don't think that I don't.
         On the other hand, we have to evaluate these things and
     scrutinize them because this is the first time that we are really doing
     such things in a big way.
         First of all, I think that the document, the written
     document, should reflect some of the thoughts that you just gave us,
     Frank.  For example, we had a similar -- not quite the same but similar
      discussion in the development of Regulatory Guide 1.174 about whether
      the lines were bright or different shades of gray and so on, and what
      Bruce
     said about maybe the band really being more meaningful -- now I don't
     know if you already have several bands of regulatory action and now you
     are making the boundaries between the bands themselves -- what all that
     means -- but also maybe you should give the option to a utility to come
     back and argue on the basis of their IPE or PRA that for them BWR high
     pressure injection system unavailability of .07 makes sense and they
     will maintain that, but they have other things that compensate for that.
         Right now, there is no option like that here.  It is generic
     thresholds and the truth also is that very few people really understand
     the subtleties of all this.
         I mean you are saying there is a danger for abuse.  I think
     the probability is pretty high that well-intentioned people will abuse
     this because they don't understand the fundamental idea of overseeing
     probabilistically -- not that I understand it.  It took me a while to
     understand this and make the points.
         MR. GILLESPIE:  Well, one of the criticisms of the package
     was that we didn't have a clarity of purpose in there, and that is
     something we're going to fix -- the idea that this is strictly an
      indication process, from which you then go into a process of
      understanding
     when the indication says more understanding is needed.
         I can't fix it here today --
         DR. APOSTOLAKIS:  I know.  I am just raising the issue.  I
     know.
         MR. GILLESPIE:  The ideal would be plant-specific profiles.
         DR. APOSTOLAKIS:  Yes.
         MR. GILLESPIE:  Because just by the nature of the
     statistics, we are agreeing that 5 percent on any given indicator could
     be outside the band and that's perfectly acceptable.
         DR. APOSTOLAKIS:  Right.
          MR. GILLESPIE:  There are tails on every distribution, but
     being in the tail doesn't mean you are bad.  It means we need to
     understand why so we know you are not.  We do need to put that clarity
     in it.
         How we get there from here and how fast we can do it I don't
      know, and the other piece we need to be concerned with, as we part
      from generic and go to plant-specific, is how do we communicate that
     to the other stakeholders.  We may understand it and the industry with a
     PRA group at each facility may understand it, and it's kind of a
     compromise in there in a somewhat more simplistic view in trying to have
     something that we can also communicate to the public.
         There is some simplicity driven into this.
         We also came up with another point of confusion which we are
     starting to get clarified with people, and that is that there are two
      very important matrices:  one that Pat is going to go through, which
      applies to individual indicators, and then also a parallel process to
     apply to individual inspection results.  That is an individual
     indicator.  It is one out of 20 indicators.
         Then there is a matrix that Mike has in the assessment
     process which is the matrix that covers expected agency actions or
     regulatory response.
         Initially when people thought about the concept they talked
     about green plants.  We said but there's no green plant.  There is a
     green indicator.
         Also clarifying the difference between a threshold and a
     band -- well, thresholds establish bands but the context we use it in is
      really a licensee response band, a regulatory response band; and the
      terminology in there, if you eliminate the colors, is actually focusing
      on the bands.
         Now the interesting point here is that the industry has -- I
     don't want to say -- I guess I could say has tentatively agreed that the
     licensee response band as a first cut on this is reasonable.  They are
     not saying it is perfect, but they are saying it is reasonable as a
     first start, that people should be able to actually have some things go
     wrong at a facility and they should be able to correct them before they
     pass out of the band.
         And that's a broad expectation of establishing the width of
     the band, be it three scrams or the unavailability numbers.  What we
     don't want is an indicator system that trips on a failure, on a single
      failure, or on a single thing.  So, 95 percent performance over the last
      several years, as an industry set -- that is kind of how we picked it.
      Then we looked back and said, well, is this low risk?  And the answer to
     both of those was it looks achievable, in general, recognizing there is
     a 5 percent tail on every indicator, and it is low risk.
         I don't know that we can do much more at this point, but
     this is a starting point.  That's the other emphasis here.  This is a
     starting point.  We have got a list of about six different topics from
     research on longer-term on how to mature this and make it better.  So --
         DR. APOSTOLAKIS:  I think that by putting these thoughts in
     the document and starting out by saying, you know, ideally, one would
      like to have a plant-specific PRA basis for this.  If a utility wants to
     do it, we are open to suggestions, you are very welcome.  Now, we have
     to do it in a generic way, so we have to select a way for doing it.
         Now, one way might be to say, well, I will take the highest
     value in every chart and go with that.  Well, we don't want to do that
     for some reason.  So we want to be generic.  We have this requirement. 
     We go with the 95th percentile, but we will -- this will be an
     indication, blah-blah, everything you just told us.
         MR. GILLESPIE:  Yeah.
         DR. APOSTOLAKIS:  And maybe soften the impact.  Now, let me
     tell you what really bothers me at a philosophical level without
      necessarily attacking this document.  We can work in PRA space and be
     as prescriptive as we are now in the deterministic space.  If we ever
     produce a document that will say, okay, it will be risk-informed, so now
     we are going to tell you what a threshold, a regulatory threshold is for
     scrams per year, a regulatory threshold is for unavailability of this
     system and that system, I would really object to that, because now we
     are being very prescriptive in PRA space.  It is none of our business. 
     If the accident sequences have the frequencies that are acceptable, why
     should we regulate the initiating events and so on?
         So that is what really struck me, and I am bringing it up,
     that, perhaps unintentionally, you do some of that here by saying, look,
     95 percent is what we are choosing and the other 5 percent -- now, if I
     am the owner of a facility, what do I do?  I get a signal from the NRC
     that my unavailability is not really good, but that is not a job of the
     oversight process, that is somebody else's job.
         So, now, for oversight purposes, I think it is all right to
     look at each cornerstone, perhaps look at other things below the
     cornerstones, because you are overseeing now, you are making sure that
     what they told you remains that way in the future, so that is not
     intrusive in my mind.  But if you demand, though, that the frequency be
     below a certain level, that is regulating the frequency of initiating
     events and that, I don't think we should do.
         MR. BARANOWSKY:  There is no demand associated with these
     green-white thresholds.  They are exactly what you said, and, that is,
     they are a point of oversight in which we increase our oversight because
     we have indication that we should start to look in those areas.
         When you get to the point of unacceptable thresholds, where
      we talk about taking actions such as shutting down a plant,
     that is a different story.  Then maybe that is the point where we are
     imposing some regulatory action on a licensee.  But this is oversight,
     minimal oversight when you are below that line, a little more oversight,
     which seems to make sense.  Where are you going to spend your resources? 
     Well, where it seems like the performance is not within the norms, and a
     few sigma outside of the norms is the point where you say, is that an
     outlier?
         DR. APOSTOLAKIS:  But you see, Pat, that is exactly my
     point.  That is why I asked what is the objective of this.  Aren't you
     bringing now another element in the regulations, namely, how does your
     plant perform with respect to the others?  When you licensed it, did you
     do that?  No.  When you licensed it, you said here is a set of
     regulations, if you meet them, you get your license.
          Now, we are saying something else.  We are saying, but what
      if you happen to be outside the bounds of what everybody else has?
          MR. BARANOWSKY:  That was just done as a convenience because --
     I think you are right, we would like to have a more plant-specific
     approach to this, where some plants are more susceptible to high or low
     reliability depending on system design and operating characteristics. 
     But as a first cut, we said let's just start with this more generic
     treatment.
          DR. APOSTOLAKIS:  I think we need a good discussion up front
      of what the purpose is, that you are fully aware of the fact that there
      are certain risks by going this way, but there are some benefits as
      well, because we can do it, it is generic and so on, and if somebody
      wants
     to come in here with a plant-specific PRA and argue that their
     thresholds should be different, you are willing to listen.
         MR. JOHNSON:  George, let's not forget also that we do what
     this process is talking about trying to do, we do it today.  We look at
     the performance of plants, we look at it in some areas that we have set
     up for ourselves.  We make decisions about events as they occur on a
     real time basis.  We decide whether we need to engage, to do more
     inspection based on the instincts of the inspectors.  I am being a
     little bit cavalier, but we use our judgment, based on our experience,
     based on our understanding of what is important from a risk perspective
     today to make decisions about whether we engage, about -- the extent to
     which we engage.  Do we send out individual inspectors?  Do we send out
     a team of inspectors?  So on and so forth.
         And what we need to keep in mind with this process that we
     are proposing is that we are simply trying to add a structure and a
      framework to do that in a way that is a hierarchical approach, as the
     ACRS has pointed out, that we need to go on, we think it is a good
     direction to go in.  That starts with this notion that, given the fact
     that we are working towards this mission, this common mission of public
     health and safety, and then that we have had cornerstones which say,
     hey, given the fact that we have got this robust framework, regulatory
     framework, and all the requirements, we can't possibly look at
     everything.  And even if we could look at everything, what would we do
     with all of that information?
         So we have set up this series of cornerstones and PIs and we
     have established PI thresholds and inspection thresholds.  And the
     entire notion is to be able to take a look, from an objective
     perspective, to see when plants are beginning to wander from what we
     accept as the utility response being -- and this notion that the utility
     can respond, and to the point where we need to begin to take some
     action.
         Now, we are not going to be measuring delta CDFs, we are
     going to be looking at numbers of scrams, we are going to be looking at
     some very simple things.  And the whole reason to do that is to be able
     to have some objective way to figure out when we ought to engage or when
     the utility ought to engage, and the extent that we ought to engage. 
     And that is what the action matrix and all those things talk about.
         DR. APOSTOLAKIS:  I understand.
          DR. BARTON:  George, I think utilities are used to working
      this way.  You look at the INPO indicators; they have got scrams, they
      have
     safety system actuation, and they have got goals set for each one of
     those, and utilities try to operate to stay within the goal.  This is
     the same kind of thing.  If you get enough -- above enough of these
     bands, you are probably in trouble.  If you don't meet a lot of the INPO
     indicators, you get a real good look at those areas when they come in
     and do an evaluation as well.
          I think it is a trust thing here.  If you get above a couple
      of these targets, is the NRC going to come in and shut you down right
      away?  I think that gives you some nervousness about how this is going
      to be used.  But I -- I don't know, I think it is a good process; try to
      figure out what is a better process than what they have laid out for
      looking at, you know, how plants are performing.  You know, where are
      plants getting out of whack, so to speak, and what is the reason for
      that?
          If you follow the maintenance rule, and you have got
      training, procedures, and human performance, and all those kinds of
      things are, quote, "okay" at your plant, you are probably not going to
      have your plant above this threshold in these areas.  So all this tells
      me is, you know, what is going on at that plant?  What is the
     maintenance history of that system?  What is the problem with the
      maintenance rule at this plant such that they are up with this system
      unavailability in the RHR system?  And that is what I think you use this
     for, and I don't have a problem with that.
         Now, if you are going to come in here and beat me up over
     that and shut me down because of it, I think that is the trust issue
     here.
         DR. APOSTOLAKIS:  Well, no, first of all, I think this is
     one of the best risk-informed documents --
         DR. BARTON:  I thought you believed that until this morning. 
     I am wondering what is going on here this morning.
         DR. APOSTOLAKIS:  I do believe it.
         DR. BARTON:  This is six months revisited or something.
         DR. APOSTOLAKIS:  And I want it to become better.  But,
     remember, this is -- no, really, you did a great job, don't get me
     wrong.  I am just arguing a couple of points that I think -- three
     points that I think are very important, and some of them can be remedied
     by writing a better background and introduction.
         But I think, John, it is different from an industry -- when
     I have an industry organization like INPO going to a particular plant
     and saying, hey, this is the norm, you are near the tail or you are
     outside, and so on, it is very different from having a regulator come
     and tell me the same thing.
          Let's say that I own a plant, and I tell the
      NRC, I have legal documents here saying that you have granted me a
      license.  Nowhere in the license do I see initiating event frequency and
      unavailability and this and that.  And now you are going to inspect me
      on that basis and make decisions?  I am going to have my lawyers after
     you.
         MR. BARANOWSKY:  George, we do that now without this
      objective cut.  We see something happen at a plant, and with our own
      personal culture and operation, we go do things.
         DR. APOSTOLAKIS:  Right.
         DR. BARTON:  They do it with the SALP process, which is much
     more subjective than this.
         MR. GILLESPIE:  George, one of the --
         DR. APOSTOLAKIS:  No, no, no, no.  Wait, wait.  The fact
     that we are doing it now, though, let me finish the thought, because
     Michael is already -- the fact that we are doing it now doesn't mean
     that it is the right thing to do, and, in fact, I thought that was one
     of the major complaints at the stakeholder meeting, was the inspection
     process, was it not?
         MR. GILLESPIE:  Yeah, the lack of transparency and the lack
     of a standard.
         DR. APOSTOLAKIS:  And too intrusive.
          MR. MALLETT:  This is Bruce; let me make a comment.  I think
      George is right that this is one of the holes that the Commission
      emphasized to us that we need to work on:  establishing these
      thresholds correctly.  And, also, how the inspection findings
      interplay with these performance indicators.  Let's say somebody moves,
      as you said, George, from .04 to .07 -- I think you were using
      unavailability of the HPSI system -- then we have to look at the
      inspection program and what it has shown us in that area as well. 
     Because it all fits together to say, do we believe the objectives of
     cornerstone were met?  That is what we designed the program to do, is to
     determine is the licensee meeting the objective of that particular
     cornerstone.
         DR. APOSTOLAKIS:  Bruce, if the four of you gentlemen were
     doing this all the time for other plants, I wouldn't have any problem. 
     Okay.  I wouldn't have any problem.
         MR. MALLETT:  I want to mention a couple of other points,
     Frank, that are different than what George was raising.  But it is close
     to his original question on what was our objective.  In the
      risk-informed baseline program, we wanted to focus on risk in planning
      inspections and conducting inspections, and we have got that concept in
     the process.  We also wanted to make it clear why we are inspecting
     something from a risk-informed perspective in linkage to our mission. 
     We have got that in there.
         And, last, we wanted to have risk information put into how
     we select our samples, which is different than today, and we wanted to
     have that in there.  So when you look at why we did what we did, those
     were some of the original reasons that we put into the program.  So I
      recognize we have a whole lot of thresholds, but we can't forget some of
      those objectives on why we wanted to do what we wanted to do.
         DR. BONACA:  One comment I have, if I could.  I don't see
     why this discussion should not be reflected somewhere in the early
      portion of the document, because you are using a lot of PRA --
      risk-informed information coming from PRA concepts.  So you have
      got to put some kind of insight into how this is going to be used, the
      way
     you are describing here.  And I think that that would enhance the
     understanding of this approach for inspectors, in fact, and for
     everybody else.  So it would improve the scrutability of the process
     that you are talking about.
         DR. APOSTOLAKIS:  One of my real concerns here is that we
     will send a message to the rest of the NRC that when you develop
     risk-informed regulations, it is okay to put thresholds on initiating
     events, on unavailability.  Well, it is not okay.  It is not okay at
     all.  That is not the idea of performance-based -- risk-informed,
     performance-based regulation, that is not the idea.  And it is so easy
     to fall into that pitfall, because, you know, it is detailed, it is
     nice, it is something I can measure, but that is not the intent.  For
      oversight purposes, it is okay to look at these things, but not to put,
      you know, regulatory limits on them -- then you are back to a
      prescriptive system.
         MR. GILLESPIE:  Let me go back to one comment that was made
     here that we didn't address.  And that was, one, this is an indicator
     process, including inspection, and part of what a utility gets, if you
     would, from participating with the indicators, is the inspection program
     won't duplicate the information that that indicator is giving us.
         So there is some dependence on the indicator.  We have a
     reliance that we are not going to duplicate that area.  We're going to
     do some verification inspection.  So there's definitely a backoff, if
     you would, on inspection.  Because the big question was what information
     do we need to make a reasonable assurance finding.
         The other thing is, this is not diagnostic or cause.  When
     an indicator says you need to understand what's happening, the problem
     is still going to have to be discussed and identified in the context of
     the regulations.
         You're right, if we went in and said you've got four scrams
     and therefore we're going to take some action based on that, no.  The
      root cause of the problem is still going to have to be established, and
     any corrective regulatory action taken is still going to be based in
     regulation and license.
         So when you do get engaged, you tend to fall back on the
     traditional system of what was the cause, what's the requirement, was a
     requirement violated, and if a requirement wasn't violated, then we have
     to say we understand what happened and it's okay, or it's a new
     phenomenon, and we have to do something about it and issue an order or a
      license condition or something.  So it gets us back into kind of
      today's environment when you do become more diagnostic.
         That was the intent.  It's a starting point.  It's not a
     finishing point, and we recognize that.
         The other thing is, it was never our intent to rewrite this
     document, by the way.  This document goes away by June.  This document
     is really the description of the concept that we would use as our
      guidance in writing what I'll call the real program documents.  So this
      description of not abusing things would actually go into whatever we
      replace Inspection Manual 2515 with, which has the philosophy of the
      inspection program in it.  One of the cautions I want to give is I don't
     want to keep rewriting a document that's really not a functional
     document in any sense or any system.  This was a descriptive document. 
     So --
         DR. APOSTOLAKIS:  Is this a public document, by the way?
         MR. GILLESPIE:  Oh, yes, yes.  It's on the Web.  The
     Commission gave us permission to put it out before they even saw it so
     we could get comments.  Comment period has started, and we'll see what
     comments we get.  I agree that -- the Commission said the same thing,
     the context that we were talking to at the meeting was absent by way of
     like an introduction or contextual statement in the front of it.
         DR. APOSTOLAKIS:  Let me ask you another question, Frank --
     or all of you.  One of the particularly sensitive issues with some of us
     on this Committee, and I think with the industry, is that PRA is
     perceived as being a tool for adding regulations or adding burden.  Is
     this document removing any of the burden, the existing burden?
         MR. GILLESPIE:  Yes.
         DR. APOSTOLAKIS:  Okay.
         MR. GILLESPIE:  In fact, if you look in the inspection
     section, there are some hours there, and we didn't project it all the
     way forward, but I had given this information to Sam, and he used it at
     the Commission meeting, so now it's in a public transcript, so I feel
     free to use it again.
         This probably reflects a 15 to 20 percent reduction in
     overall inspection.
         DR. APOSTOLAKIS:  Okay.
         MR. GILLESPIE:  In two phases.  When Pat ballparked how many
     sites would actually meet the entire profile as being in the licensee
     control zone, it's in the ballpark of about 50 percent of the utilities. 
     So about 50 percent of the utilities that are now getting some level of
     reactive kind of inspection from us would no longer be getting that
     level of reactive inspection.  So I think there's going to be kind of an
     immediate incremental benefit from being risk-informed and being
     focused.
          And now the question would be -- I think we need to be
      cautious as regulators, and we're generally conservative -- as you're
      fine-tuning to a plant-specific nature, maybe, that as you fine-tune,
      you
     don't lose some sense of checks and balances that you've tried to build
     into the system.  And if you do lose it, you know what you're losing.
         So yes, it looks like about a 20-percent savings.  The
     immediate piece in the baseline program itself from the core to this
     baseline looks like it's approximately 15 to 20 percent.  So that's 15
     to 20 percent of the core, which represents about half the inspection
     program across the board.  And then there's about another 20 percent
     which is the reactive effort, which might not take place.
         Now what's going to happen, George, and you're absolutely
     right, is I could say the safety gain here is everyone's going to strive
     to be in the best category.  It's a possibility.  That's a business
     decision.
         DR. APOSTOLAKIS:  Which is not a bad idea.
         MR. GILLESPIE:  Yes, I mean, you know, I mean, I have to
     recognize that.  I'm not living in a vacuum, because utilities every day
     make the choice do I go along, and this is one of the criticisms.  So
     I'm not saying this as a fact, but do I go along with the inspector and
     just fix it, or do I fight it.  I mean, that's the kind of a thing you
     hear from the regulatory impact survey going back ten years.
         It would be a business decision that they would strive to
     stay in the licensee control zone in all of these areas.  Now do I feel
      that we've kind of succeeded, if you would, in setting these thresholds
      as credible thresholds, reasonable thresholds?  I think we have.  You
      know,
     I'd say the best shot is 95 percent.
         DR. BARTON:  They are reasonable thresholds.
         DR. APOSTOLAKIS:  Yes.
         MR. GILLESPIE:  They're reasonable thresholds.  Now that's a
     business decision.  It could happen.  And that could happen without us
     doing any arm-twisting.  Someone could say you know, if there were
     really --
         DR. APOSTOLAKIS:  All you have to do is show up, Frank.
         MR. GILLESPIE:  Okay.  Okay.  All we have to do is --
         DR. BARTON:  I think we need to get on with Bruce.  We're
     going to lose him.
         MR. GILLESPIE:  Let's get to Bruce.  Bruce only has about a
     half-hour.  I think we're in violent agreement.  And the Commission was
     also in agreement that we need some contextual words in the front of
     this to put it in context.  I don't want to rewrite the document,
     because the document actually will go away within six months.
         Bruce.
         MR. MALLETT:  You should have in front of you -- I think the
      easiest way, without repeating our last meeting, is one slide that I
      call a flow diagram.  It shows the scope of the program, planning
      inspections, and selecting.  Do you have that in front of you?
         DR. APOSTOLAKIS:  Yes.
         MR. MALLETT:  Okay.  Let me walk through that just briefly. 
     I don't want to bore you, but I want to highlight some of the concepts
     we had previously in the program.
         First, we've been talking about an oversight framework where
     we have performance indicators, a baseline inspection, and an assessment
     process.  That led us to the cornerstones of safety concept in which we
     have seven cornerstones.  Those then defined inspectable areas and gave
     us inspectable areas out of that.  And George Apostolakis asked us
     before to put in charts showing that linkage from the inspectable areas
     to the mission of the Agency and to the cornerstones of safety, and,
     George, we did that there in Appendix 2 to Attachment 3 on the document. 
     There's a chart for each of the cornerstones that shows inspectable
     areas and performance indicators.
         DR. APOSTOLAKIS:  I've seen that, Bruce.  Yes.
         MR. MALLETT:  Okay.  I just wanted to let you know we do
     listen to you, George.
         We also --
         DR. SEALE:  Never doubted it.
          MR. MALLETT:  The document I'm going to describe these
      concepts out of -- how the baseline inspection program is risk-informed
      -- is Attachment 3 to the Commission paper.  There are nine sections to
     that document.  Each section describes a particular concept of the
     program.  For example, one that you asked us for, Dr. Seale, you asked
     us the last time to make sure we put in resources.  We put in a
     projection of that in section 8 of that Attachment 3.
         Let me continue looking at this chart with you to just
     describe a few concepts.  First of all, the scope of the program is
     defined by the cornerstones and these inspectable areas, but one of the
     goals we had up front was to define why we inspect.  So we put that in
     something called a basis document, and that's why we drew an arrow from
     basis documents up to scope.
         The next thing you would do in this baseline inspection
     program, you would plan inspections.  And we developed something called
     a risk-informed matrix.  We call it RIM No. 1.  Those are also an
     appendix to Attachment 3 to the document.  I believe it's Appendix 3.
          If you look at RIM No. 1, it outlines for each inspectable
      area what the frequency of inspections is going to be, how many
      inspections you might do, and how many samples you might select.  We put
      hours in there as a way of budgeting.  One of the points of contention
      for the Commission -- you ought to realize they challenged us on this --
      was:  it's a great concept, Bruce, but for a lot of the inspectable
      areas, you don't have much definition of the sample.  And they are right
      on that.  We need, therefore, as we go through the pilot and use this
      program, to further develop what samples we'll have and how many.
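          [Illustrative sketch:  one way to picture what RIM No. 1
      records for each inspectable area -- inspection frequency, number of
      inspections, and sample size.  The entries and field names below are
      hypothetical placeholders, not the contents of the actual matrix.]

          from dataclasses import dataclass

          @dataclass
          class RIMEntry:
              inspectable_area: str
              frequency: str    # e.g., "annual", "biennial"
              inspections: int  # inspections per planning cycle
              samples: int      # samples selected per inspection

          rim_no_1 = [
              RIMEntry("emergency AC power", "annual", 1, 5),
              RIMEntry("fire protection", "biennial", 1, 3),
          ]

          for entry in rim_no_1:
              print(entry.inspectable_area, entry.frequency,
                    entry.inspections, entry.samples)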
         The next thing where we put risk into the process is in
     selecting the sample.  Once you've planned on your 12-month cycle and
     your plant performance review in each region, you'll take RIM No. 2 then
     and select the system that you might want to look at.  That's generic. 
     Plant specific, you'll have to modify that by using the SRAs and any
     plant-specific information to determine your sample.
         And then the next thing you do in the process is you
     inspect, and we've got the procedures concept set up as a procedure by
     cornerstone to perform the inspection.  And I'm going to -- procedures
     are listed in section 4 of that Attachment 3.
         And the last thing you'll do is you'll assess those findings,
     much like today.  We envision you'll have a plant information matrix
     type summary of the finding, but we envision that you'll have a risk
     scale -- and this is another hole right now; this risk scale has to be
     developed -- set to what we're thinking of as maybe three areas.  You'll
     have a category for performance that we believe is not risk-important. 
     You'll have another category for findings we believe are very
     risk-important.  And then one that just says it's a finding, but it's an
     average performance.
         DR. APOSTOLAKIS:  Would you --
         MR. MALLETT:  We haven't developed this much -- yes, George.
         DR. APOSTOLAKIS:  Would you put another arrow there under
     findings assessment to bring up again the plant-specific nature of these
     things?  In other words, you do your inspection, and then when you
     evaluate what you found, you look again at the plant.  Unless that's
     understood.  Maybe that's understood.
         MR. GILLESPIE:  Yes.  George, let me kind of come out of the
     closet a little bit on how we're working this.  We actually right now
     kind of have a draft which has two scales, two steps right now, and then
     there will be a third step.
         One is a generic screening out of totally risk-insignificant
     anyplace kind of items.  Then there's another step which gets you into
     is it risk-significant in a sense at that facility.  And then there's a
     third step that says the inspector isn't in a position to make this
     judgment, we need the SRA or we need some more analytic help on the
     bottom.  So it's kind of a multistep screening process, because we need
     something that an inspector can quickly and rapidly use for the majority
     of the cases.
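         [A minimal Python sketch of the multistep screening Mr.
     Gillespie describes -- a generic screen, a plant-specific screen, and
     referral to the SRA.  The function, field, and criteria names here are
     hypothetical; the actual scale was still in draft at this meeting.]

     def screen_finding(finding, plant_model, sra_available=True):
         # Step 1: generic screen -- items that are risk-insignificant
         # at any plant drop out immediately.
         if finding["generically_insignificant"]:
             return "screen out: not risk-significant at any plant"
         # Step 2: plant-specific screen -- is the finding
         # risk-significant at this facility, given its dominant
         # systems and sequences?
         if finding["system"] in plant_model["risk_significant_systems"]:
             return "risk-significant at this facility"
         # Step 3: the inspector cannot make the call alone; refer the
         # finding to the senior risk analyst (SRA) for analytic help.
         return "refer to SRA" if sra_available else "hold for analysis"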
         DR. APOSTOLAKIS:  Okay.
         MR. GILLESPIE:  So, yes --
         DR. APOSTOLAKIS:  That makes sense.
         MR. GILLESPIE:  We're getting to that point, because it's
     the risk at the facility that we have to focus on.
         DR. APOSTOLAKIS:  Okay.
         MR. MALLETT:  Okay.  The last point I wanted to make is that
     we put down a block there that says cornerstones objective met, yes or
     no.  The idea of that is to show that this is a minimum baseline program
     to be performed at all plants.  If they meet the objectives, they'll
     continue to receive the baseline program.  If they don't meet the
     objective -- through crossing some threshold, as we talked about
     earlier, or from an assessment finding -- we want to increase the
     inspection above that for that plant.
         So, George, you asked the question of reducing burden.  An
     implication of this is that currently, you know, rather than staffing
     inspectors at some sites at the N level per number of units, we have in
     some cases staffed at the N plus 1 level.  If that plant is in the band
     where we believe they're meeting the cornerstone objectives, the new
     program will say they will only receive the baseline inspection
     program; you'll have to decide what you do with that other inspector
     who is there so that the plant doesn't receive more than that.  So
     that's a big implication of this change.  But if we carry it through,
     that will certainly be a reduction of unnecessary burden on a
     licensee's facility.
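         [A minimal Python sketch, under the same hypothetical naming,
     of the "cornerstone objectives met, yes or no" block on the flow
     diagram; the inputs are assumptions, not the program's actual data
     elements.]

     def inspection_level(thresholds_crossed, assessment_findings):
         # Meeting all cornerstone objectives keeps the plant on the
         # minimum baseline program performed at all plants.
         if thresholds_crossed == 0 and not assessment_findings:
             return "baseline inspection program only"
         # Crossing a threshold, or an assessment finding, increases
         # inspection above the baseline for that plant.
         return "baseline plus increased inspection"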
         I can mention more, but what I want to do is refresh you on
     those concepts and then leave it to any questions that you might have.
         DR. APOSTOLAKIS:  Which attachment has the inspection areas?
         MR. GILLESPIE:  Three.
         MR. MALLETT:  That's Attachment 3, but the Commission paper,
     the SECY paper, the main concepts are just discussed or described on
     page 12 of the Commission paper, and the implications for policy I just
     mentioned are on page 15 of the Commission paper.
         It also listed -- I'll mention one more thing -- in
     Attachment 3 in the beginning you'll find an executive summary and in
     that executive summary we list the work we believe still needs to be
     done prior to conducting a pilot program.
         DR. APOSTOLAKIS:  Now the inspectable areas, Bruce, like
     Attachment 3, I guess arabic 1, page 1, program -- Part 1, inspectable
     areas.
         MR. MALLETT:  That's correct.
         DR. APOSTOLAKIS:  Attachment, page 1.
         First of all, this idea of complementary and verification
     inspection I think is very good.  You are really trying to discriminate
     as much as you can.
         MR. MALLETT:  Now I have to give credit to Pat Baranowsky
     and his group for that --
         DR. APOSTOLAKIS:  He just stepped out of the room, so let's
     hope he reads the transcript.
         The inspectable areas by cornerstone -- how did you come up
     with these?  Was it just your group sitting around the table and saying,
     well, what is it that we need to do here or was there some more
     systematic way, because this is really what you are actually inspecting,
     right?
         MR. MALLETT:  What we did originally was we sat around the
     room -- or Pat Baranowsky's group sat around the room -- and we said,
     what all do we believe we need to inspect to meet these cornerstone
     objectives?
         I think we had a hundred and some areas that we came up with
     originally, and then as the framework team developed the performance
     indicators and as we met with them and discussed them, we said where do
     we have a performance indicator that we don't need to inspect because it
     already covers that, and we crossed the list off for those, and then we
     said -- we made another cut.  What is risk important?  Well, we crossed
     a number of things off the list that we felt were not risk-important.
         I'll give you an example.  Under the Mitigating Systems
     cornerstone we originally had an inspectable area of observing
     maintenance activities.  We said we don't need to observe maintenance
     activities.  The key part of those from a risk perspective is the
     post-maintenance testing, we thought, and so we left that one in but
     took out observing maintenance activities.
         That is how we arrived at inspectable areas, but one thing
     I'll also mention, George, in Appendix 1 to that attachment in the basis
     documents we added something after we met with you all the last time. 
     We described in there why we inspect, but we also discussed performance
     indicators.
         Remember you all suggested we highlight how those interplay.
         DR. APOSTOLAKIS:  Yes.  In fact, let's go to Attachment 1,
     page 5, because that is where my notes are.  Attachment 1, page 5 --
     where you also have the inspectable areas by cornerstone.
         MR. MALLETT:  Right.
         DR. APOSTOLAKIS:  For example, if I look at initiating event
     cornerstones, maintenance rule implementation, maintenance work
     prioritization and control -- why do I have to worry about work
     prioritization and control if they comply with the maintenance rule?
         MR. MALLETT:  For initiating events?
         DR. APOSTOLAKIS:  Yes.
         MR. MALLETT:  What we felt there was that there are some
     events where maintenance caused the problem by not deciding the right
     -- I don't want to say sequence, but by not looking at one system they
     are doing maintenance on versus another system, because they didn't
     prioritize their work in the sequence that wouldn't cause that problem
     later on -- like if they had an event while they were working on a
     particular system, and they were also doing maintenance on another
     system that could affect it, they need to be careful of the interplay
     of the two.
         That was the idea:  to look to see that the licensee is
     looking at the interactions between systems when they do maintenance so
     that they don't cause initiating events for one system by working on
     another system.  Does that make sense?  That's what we were trying to do
     there.
         DR. APOSTOLAKIS:  Yes.
         DR. BARTON:  Understand it.
         MR. GILLESPIE:  George, I think -- isn't there a change to
     the maintenance rule that's been briefed to the committee?  I am not
     sure where it stands.
         DR. BARTON:  We haven't heard that yet.
         DR. APOSTOLAKIS:  We haven't seen it yet.
         MR. GILLESPIE:  It talks about configuration evaluation.
         DR. BARTON:  We know it is coming, Frank.  We haven't heard
     the details of the rule change.
         MR. GILLESPIE:  That may impact this area if it becomes part
     of the rule, but for inspection purposes we're not touching it until the
     rule is a rule.
         DR. APOSTOLAKIS:  Now heat sink performance -- we can't have
     a PI for that?
         MR. MALLETT:  We believe you can but we're not going to
     develop --
         DR. APOSTOLAKIS:  Oh, okay, okay, all right.
         MR. MALLETT:  If you look at heat sink performance, we put
     complementary down there --
         DR. APOSTOLAKIS:  Right.
         MR. MALLETT:  -- as saying that you need to do an inspection
     in that area because there is not a performance indicator.
         DR. APOSTOLAKIS:  But there could be one.
         MR. MALLETT:  It's not supplemental to a performance
     indicator, but it could be.  Same way if you look under mitigating
     systems at configuration control, and you follow that to the
     inspectable areas, one of them we have there talks about equipment
     alignment and so forth.  There is no performance indicator for shutdown
     conditions right now, but we could maybe develop one for shutdown
     conditions, for example.
         DR. APOSTOLAKIS:  Anyway, yes, I --
         DR. POWERS:   And that raises a question.  In the document
     there are PIs that you define now and there are PIs that you think you
     might be able to define in the future.
         One of the areas where you did not I think define a PI but
     rather relied on inspection was fire protection, yet the fire protection
     people are off busy trying to create a performance-based code with
     performance indicators.
         Why did you feel that it was unlikely that they would be
     successful?
         MR. MALLETT:  I am not sure, Dana, that we thought it was
     unlikely they would be successful.  We felt at this point in time that
     there wasn't one developed and so --
         DR. POWERS:  I mean that is not the criterion on shutdown,
     so why should it be the criterion on fire?
         MR. MALLETT:  Well, the same way in shutdown we felt that
     there wasn't one developed yet so we would not rely on that performance
     indicator.
         DR. POWERS:  I understand that, but in some cases you have
     said, well, in the future there might be, and you have allowed for that
     possibility that in the future there might be --
         MR. MALLETT:  Oh, I see your point.
         DR. POWERS:  -- but in fire you apparently had little faith
     that this could be accomplished.  Now that may be a very, very rational
     viewpoint but I just wondered why you had that viewpoint.
         MR. MALLETT:  I don't think it was having little faith,
     Dana, as much as we may have an oversight -- an inconsistency between
     the two writeups -- and we may have to look at that.
         DR. POWERS:  I think you should, because if that is a major
     initiative that the Staff is undertaking, voting with your feet on it
     may not give them the encouragement they need at this critical juncture
     in their work.
         MR. BARANOWSKY:  I think we identified the performance
     indicators that we thought we might see some development and acceptance
     of in the next six months, and our impression was that the fire program
     performance indicator probably was going to take longer than that to
     develop and get operable.
         DR. SEALE:  You are less sanguine about shutdown?
         DR. POWERS:  In six months we are going to have a
     shutdown performance indicator?
         MR. BARANOWSKY:  Well, there were some proposed by the way,
     and what we didn't have was the time to get the data and look into the
     details of the indicator to see whether we thought it was a really valid
     indicator or not.  The feeling was that it probably could be developed. 
     It was a judgment call.  That's all.
         DR. POWERS:  Well, it may be very sound.
         MR. BARANOWSKY:  You wouldn't be the first to say that it's
     not going to be possible.
         MR. GILLESPIE:  It wasn't our intention to take a shot at
     the fire people.
         MR. MALLETT:  Yes.  We can take Dr. Powers' comment and look
     at it during the pilot program and see if there is some way we can
     factor that in better, give more credit to them -- good point.
         MR. GILLESPIE:  I think simplistically when you look at
     shutdown, you can ask the question how many water sources are available
     and is there power available to pump the water to where it needs to be
     and is the heat sink available?
         All of a sudden I have got specific things that you can
     count at any given time at a shut down facility and say yes, I have got
     two water sources available -- yes, I have power from two different
     sources ready to go to the pumps.  It's actually something we could
     count, so we saw some near-term achievability in shutdown.
         We were not familiar enough -- in fire protection, when you
     look at fuel sources and other things, how would you measure fuel
     sources?  It seemed like a bigger problem, so there was kind of, I
     think, a near-term, longer-term perspective when we were doing it.
         I literally mean "we" -- I mean just the team working on it
     saw that shutdown was a place we could make progress and fire just
     needed a whole lot more work before you had something that was countable
     and easy -- something you would actually see from the control room.
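         [A minimal Python sketch of the countable shutdown indicator
     idea -- tallying water sources, power to the pumps, and heat sink
     availability.  The field names are illustrative only.]

     def shutdown_counts(status):
         # Tally what is actually available at the shut-down facility,
         # the kind of count visible from the control room.
         return {
             "water_sources": sum(status["water_sources_available"]),
             "power_to_pumps": sum(status["power_sources_available"]),
             "heat_sink_available": status["heat_sink_available"],
         }

     example = {
         "water_sources_available": [True, True],   # two water sources
         "power_sources_available": [True, True],   # two power supplies
         "heat_sink_available": True,
     }
     print(shutdown_counts(example))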
         DR. POWERS:  You have a tension that runs throughout the
     document between defense-in-depth and some of the rest of the document,
     the rest of the approaches you have.
         One area where you don't have tension, one area where
     everything that one would do for inspection and regulation of safety is
     lined up with defense-in-depth, is fire protection.  The things that
     you would count are the things that make up the elements of
     defense-in-depth in fire protection.
         Do you prevent fires from occurring?  Do you detect and
     suppress fires?  Do you protect the equipment?  So on a counting basis I
     think that you will find that you can count, and what you count are
     things that are explicitly defense-in-depth items.  There is not a
     tension between defense-in-depth and the things you are inspecting the
     way there is throughout the rest of the document -- and that is because
     fire protection is the one area where they have explicitly defined
     defense-in-depth.
         MR. MALLETT:  I guess one other comment I would make on a
     different note than what Dr. Powers was talking about was the last time
     we met NEI said they had two points of contention about the proposed
     program.
         One was they had thought when we set out there were going to
     be more performance indicators than there are now in the program, and we
     indicated that we opened the door for that, but we haven't developed all
     those performance indicators and that is why they aren't in the program
     now.
         The second point of contention they were making was --
     remember when we said we wanted to do an identification and resolution
     of problems inspection as a two-week inspection once a year -- they
     indicated they didn't think we needed to do that.  They thought it was
     repetitive.
         We have left it in the program.  We changed it to once every
     two years but we thought it was important to have in the program to have
     an independent look at the routine inspections we're doing in the
     baseline by individual inspectors and to also have an across-the-board
     look at how they are identifying and resolving problems, but I did want
     to point out to you that that was the point of contention on their part
     and I believe they still have that view even though they support the
     concepts in the program.
         If there aren't any more questions I am going to get off the
     line.  We have an enforcement conference we're going to start.
         MR. GILLESPIE:  That is risk-informed --
         MR. MALLETT:  And they are using risk information.  All
     right?
         DR. BARTON:  Any other questions of Bruce before he signs
     off?  Hearing none, thank you very much, Bruce.
         MR. MALLETT:  Thank you for letting me do it by
     videoconference; that was very good.  Thank you.
         DR. POWERS:  Mr. Chairman, I am now fighting with another
     windmill, with the same success most people have fighting windmills.
         Have we gone into the details of Appendix H?
         DR. APOSTOLAKIS:  Yes.  Not the details, the overall
     structure, but we spent an hour on it.  If you --
         DR. POWERS:  I guess I need to understand a little bit
     better the quality of the tools that have been used in defining things
     in Appendix H and better to understand the benchmarking that was done in
     Appendix I.
         I think I understand the approach that was adopted in H, and
     I will confess to being confused a little bit over Appendix I.
         Within H, I need to understand:  you utilized the SAPHIRE
     code plus some PRAs that were submitted by the licensees, or did NEI
     provide them to you?
         MR. BARANOWSKY:  We used the SAPHIRE code with PRAs that we
     obtained from licensees.  There were approximately 13 of them that we
     used.
         DR. POWERS:  And these PRAs were in some sense certified for
     this particular application?
         MR. BARANOWSKY:  They were not certified.  They had been
     reviewed as part of the IPE process.
         DR. POWERS:  And what we know about the reviews that are
     done in the IPE process is that they serve adequately the objectives of
     the generic letter that initiated the IPE process, but in no sense are
     they what I would call a technical review of how well the IPE or the PRA
     reflects the plant, how good the database is, how complete the scenarios
     considered are.
         MR. BARANOWSKY:  Yes.  We weren't really trying to do a
     plant-specific assessment.  What we were trying to do was get some
     representative models and our belief is that the results from the models
     and the IPEs when one looks at the broad insights are reasonably correct
     in terms of what is important and the kind of levels of core damage
     frequency that one sees.
         In fact, if we look at the results of the accident sequence
     precursor program, we see approximately the same order of magnitude core
     damage frequency comparability between that program and what the IPEs
     give us, and most of the important sequences are of similar
     characteristics, but details do differ and there are a few different
     ones.  So our feeling was that in terms of orders of magnitude we have
     got a spectrum of plant designs with PRAs that have at least been looked
     over somewhat.  They don't have obvious flaws in them, and therefore they
     would be reasonable models to perform sensitivity analyses to understand
     how variation in some of these parameters would affect risk calculations.
         DR. POWERS:  At least when I look at the IPE insights
     program, I see an order-of-magnitude difference between plants of
     nominally the same type.  So you've got an order of magnitude --
     uncertainty band?  I don't know what you call it in this.
         MR. BARANOWSKY:  Yes.
         DR. POWERS:  This context.  Derived from a tool that may or
     may not be qualified for this kind of job.
         MR. BARANOWSKY:  Yes.  I think the reason that it's a
     reasonable tool is because of the way that we used it.  We didn't take
     any one plant.  What we did was we ran a bunch of these calculations and
     then we sort of enveloped them.  And we have no reason to believe that
     the IPEs grossly and in a biased manner underpredict the likelihood of
     core damage for this type of a calculation, but that in some cases
     they're a little high and in some cases they're a little low.  And in
     fact most of the work that we've done in AEOD taking actual operating
     experience and comparing it to IPEs shows that about 90 to 95 percent of
     them have pretty reasonable values when it comes to initiating event
     frequencies and system reliabilities, and that there are a few that are
     unexplained as to why they are higher or lower than what we would get
     from looking at some of the more direct indications from the operating
     experience.
         So it's our belief that taking a set of these, running the
     sensitivities, and more or less enveloping the results probably is
     conservative for this cut at setting thresholds.  Moreover, I think the
     level of risk that we're talking about looking at is, you know, rather
     low.  We're talking about changes in core damage frequency of 10 to the
     minus 5, 10 to the minus 6, for which there's not a significant public
     risk associated with it, and that one doesn't need to be precise in
     calculating these things because we're not using it in that manner. 
     We're trying to get into the ballpark of what we think is an area of
     increasing risk as we move through the different thresholds and
     performance bands.
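         [A minimal Python sketch of the enveloping Mr. Baranowsky
     describes:  run the same sensitivity case through several plant PRA
     models and bound the results rather than trusting any single model. 
     The delta-CDF values are hypothetical.]

     def envelope(delta_cdfs):
         # Take the bounding (largest) change in core damage frequency
         # across the runs -- conservative for this cut at thresholds.
         return max(delta_cdfs)

     runs = [3e-6, 8e-6, 1.2e-5, 5e-6]   # per-plant sensitivity results
     print(envelope(runs))                # 1.2e-05 envelopes the set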
         DR. POWERS:  Someone might ask that given the uncertainties
     that are associated with risk estimates, is a differential of 10 to the
     minus 5 in core damage frequency detectable?  That is, maybe you're
     looking at just noise.
         MR. BARANOWSKY:  It probably realistically is the noise, but
     it is something that one can calculate.  Just like one can sit down and
     deterministically say a piece of equipment doesn't meet a certain
     regulatory rule, that probably is well within the noise also with regard
     to whether the plant is safe or unsafe.  So I'm not sure that there's
     any difference there.  The point is that we are able to identify plant
     performance characteristics that seem to show increases of these small
     magnitudes, and it's the best tool that we have available for
     identifying where we should be focusing our look.
         DR. POWERS:  Can I ask you about the status of, what,
     validation, verification, peer review of the SAPHIRE code?
         MR. BARANOWSKY:  Yes, I guess I'd have to ask someone from
     the Office of Research to give you an up-to-date statement on that, but
     my understanding is that that code has been pretty well developed,
     reviewed, and validated by the Office of Research over about 15 or so
     years at least.  I don't know that anyone here is familiar with that. 
     I'm looking around the audience.  I don't know if John is --
         DR. POWERS:  What I know is that the Office of Research has
     in many cases subjected codes that they have sponsored the development
     of to what I think is a rigorous and outstanding peer review process,
     very disciplined, very structured, published reports, and allowed
     developers of those codes to respond to the peer review comments and
     responses from the commenters.  But I am not familiar with such a review
     for the SAPHIRE code.  That doesn't mean it hasn't been done.  I'm just
     not familiar with it.
         MR. FLACK:  This is John Flack from the Office of Research. 
     We'll have to get back to you on that, Dana.
         DR. POWERS:  I wish you would, because I think this is
     crucial that the Office of Research in developing its PRA tools has
     tools that the rest of the Agency can put an enormous amount of
     confidence in.  Whereas they can have their doubts and qualms about IPE
     PRAs, they should not have any doubts about the capabilities and
     limitations of their in-house tools.
         MR. BARANOWSKY:  By the way, I would like to point out that
     most of the doubts and questions about IPEs have very little to do with
     the specific computer code that was used.  They usually have to do with
     the assumptions and completeness of the model and the tools generally
     give pretty similar results, and that's the way a lot of them have been
     tested out, by benchmarking against each other.
         DR. POWERS:  If you could give me a ten-minute guide on how
     I should reread Appendix I to understand what's been done, I would
     appreciate it, because I have to admit --
         DR. APOSTOLAKIS:  Well, let me ask, what's the process here,
     Mr. Chairman?  We seem to have demolished the presentation.
         MR. GILLESPIE:  Well, no.
         DR. BARTON:  I don't think so.  I was going to ask Frank,
     where are you going from here, because we --
         DR. APOSTOLAKIS:  We will come back to --
         DR. BARTON:  We've done some bantering back and forth,
     trying to get Bruce's input before he had to drop off.  I was going to
     ask Frank where he sees this going next -- do we take a break now, or do
     you get into this?
         DR. APOSTOLAKIS:  Because I had some comments myself, and I
     don't know when to raise them.
         MR. GILLESPIE:  No, right now what we'd like to do is
     continue the questioning on performance indicators in the oversight
     structure, and then move on to Mike and the assessment process.
         DR. APOSTOLAKIS:  Okay.
         DR. BARTON:  Why don't we finish that, and then before we
     move on to Mike, we'll take the break.
         MR. BARANOWSKY:  Okay.  Well, I mean, the questions --
         DR. BARTON:  Any other questions about performance
     indicators?
         MR. BARANOWSKY:  In this area?
         DR. BARTON:  Do we have any other questions of Pat at --
         DR. APOSTOLAKIS:  I cannot ask any questions.
         DR. BARTON:  Okay.
         MR. GILLESPIE:  I was thinking you were going to continue
     on, Dana, beyond -- the question you've asked also applies, and in fact
     the Commission asked it -- David Lochbaum asked it -- relative to the
     stuff Bruce was doing.  I'm sorry we don't have Bruce on the line, but
     we've got John Flack here.  And that was, how did you pick the
     inspectable areas to be risk-informed, given you had to average across
     PRAs where different analysts at similar plants could come up with
     different results?
         And, John, there was kind of a process of averaging -- it's
     kind of gross, but we did this -- on two plants and less than all of
     them, for picking sequences that led to areas.  Could you just, in a
     few minutes --
         MR. FLACK:  Yes, I'm John Flack from the Office of Research. 
     We looked across the IPEs at the top ten sequences across plants to draw
     insights and put those into a matrix called RIM 2.  Basically it was to
     pull the information that drives risk out there onto the table, so
     when we are choosing inspectable areas we could somehow link the area to
     those risk insights.
         So we did that -- the insights were generated, as Frank
     said, using IPE information that we brought in house through the IPE
     data base, by scanning across the dominant sequences at all plants and
     the insights contained in NUREG-1560.  And the process more or less
     revolved around -- there were about five of us involved in looking at
     the inspectable areas and weighing them with that information in the
     IPEs, as well as basic information like how much effort it would take
     even just to go look at that particular area.  So there's always some
     minimal amount of time that you'd have to spend on it.  And that's
     where we came down from 150-some-odd areas to something like 26 in the
     reactor safety area.
         Does that address the question, Frank?
         MR. GILLESPIE:  Yes.  Basically what he's describing is kind
     of an averaging process that went across the PWR designs, which meant if
     a sequence showed up in the top --
         MR. FLACK:  Top ten.
         MR. GILLESPIE:  Top ten at at least two facilities.  There
     was more detail to it, but if a sequence showed up in the top ten at a
     couple of facilities -- it didn't have to show up anyplace else -- then
     it got to influence an inspectable area, which means the inspectable
     areas at one plant might come from a sister plant of similar design
     where a different analyst found something more important.  So it's kind
     of a generic averaging, is how I put it in my lexicon.
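         [A minimal Python sketch of the generic averaging rule just
     described:  a sequence influences an inspectable area if it appears in
     the top ten at two or more facilities.  Plant and sequence names are
     invented for illustration.]

     from collections import Counter

     def generic_insights(top_ten_by_plant, min_plants=2):
         # Count how many plants show each sequence in their top ten;
         # keep the sequences seen at min_plants or more facilities.
         counts = Counter(seq for seqs in top_ten_by_plant.values()
                          for seq in seqs)
         return sorted(seq for seq, n in counts.items() if n >= min_plants)

     top_tens = {
         "plant_a": {"loss_of_offsite_power", "station_blackout"},
         "plant_b": {"loss_of_offsite_power", "atws"},
         "plant_c": {"station_blackout", "small_loca"},
     }
     print(generic_insights(top_tens))
     # ['loss_of_offsite_power', 'station_blackout']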
         Now the next step is when you apply those insights
     plant-specifically, and this is the step I'm trying to develop in the
     next phase:  the areas are defined, there will be a minimum look at
     these areas, and now you customize your look to the specific sequences
     from that facility.
         And it's a process of attempting, if you would, in a gross
     sense, to take the best of the average and say here are the areas you
     need to look at, but when you pick your specific sample, you need to
     look at your specific plant and make sure you have hit your most
     significant sequences at that plant.
         It is not a perfect system because we didn't have perfect
     IPEs and PRAs, but it was a way of trying to deal with the strengths of
     the PRAs and the weaknesses in a process way.  And the research reports
     are in draft form.  They are kind of in for final comment right now.
         MR. FLACK:  Yeah, we have got them out for comment.  We
     should be able to finalize, certainly, within the next two weeks.
         MR. GILLESPIE:  But the next step is, now, how do you go
     from the generic averaging to the specific application and not lose your
     generic insights, but, yet, do some successful customization, as you
     pick your sample, to the high probability sequences at that facility, or
     high risk sequences?  It is a challenge.  And we are dealing in gross
     levels.  This isn't a highly -- a high fidelity system we are in.
         DR. POWERS:  Is this an exercise undertaken with a pad and
     paper or undertaken with a computer?
         MR. FLACK:  The exercise of gleaning out the insights was
     done basically by --
         DR. POWERS:  No, I am talking about the step from going --
     you have got these generic inspectable areas.
         MR. FLACK:  Pad and paper.
         DR. POWERS:  And now you want to customize it.  Is it -- is
     that reasonable to have it as a pad and paper exercise, or should there
     be other kinds of tools in the hands of the people to do this the way
     you envision?
         MR. GILLESPIE:  Again, in kind of the macro sense that we
     are dealing in, pad and paper -- given we have nailed it down, we have
     kind of boiled it down to a matrix kind of approach -- and then you can
     go to the specific facility, where we know, to the best of our
     knowledge, the dominant sequences at those facilities.  It is now a
     matter of saying, what systems are predominant?  And if I have to look
     in the mechanical area in a mitigation system, what is the system I
     want to be sure at my facility will stop a sequence?  And those risk
     important things are for the most part all really available right at
     the site.
         So making that match with pad and paper -- I am not sure I
     would computerize it.  That is a subjective match of kind of the
     generic stuff that we have put together, if you would, and the
     site-specific PRA results.
         Right now we are not thinking about putting any kind of
     fancy algorithm together to try to do it other than do it.  We also have
     to focus on the people doing it.  The people doing it are likely going
     to be the senior resident inspector and the resident staff, with some
     assistance from the SRA.  And you are looking at about 20 sites per
     region, and a lot of up-front planning to do this right.
         So if an SRA spends -- if the senior resident spends about a
     week, and of that week he gets a day of the SRA's time to overview his
     recommendations and results, that is probably in the ballpark of the
     expertise that is likely to be at most facilities.  So we can't
     complicate it too much.  It has got to be very user-friendly.
         Again, not to say in the future we won't try to computerize
     it, but to get something done this year, this is the best we
     conceptually could approach it with.
         DR. POWERS:  I was certainly not thinking about this year, I
     was thinking --
         MR. GILLESPIE:  Okay.
         DR. POWERS:  -- ten years from now.
         MR. GILLESPIE:  Yeah.  I think, again, then you are to the
     point of probably having site-specific profiles.  And the degree of
     sophistication ten years from now, and someone will have settled the
     fidelity of the PRA question, I hope, by then.  We will either have an
     ANSI standard or someone will have a standard that these will be
     measured against, and then we can move forward.  Lacking that, this is
     the approach we have taken to do it.
         MR. FLACK:  There is one other thing we have raised -- we
     are discussing it with Frank -- which is developing these generic
     matrices to make them plant-specific.  So, it would be a matter of
     going to the site and finding out what additional sequences would
     have to be considered and trying to fold that into a matrix form for
     each site, so each site would have its own set of risk insights that
     would then be updated and used by the inspector.  But these are just
     things we are entertaining at this time.
         DR. POWERS:  I think there is no question that that will be
     one of the most interesting aspects of the pilot.  How we go from a
     generic kind of appendix that, quite frankly, I don't understand right
     now -- but that is my fault -- to the specific activity will be real
     entertaining.
         MR. GILLESPIE:  Yeah, because then you are going from, as
     Bruce said, areas that contain multiple systems to specific systems that
     are going to be sampled.  And we also view that as a significant
     challenge, because the directions on how to do that right now are
     lacking.
         DR. BARTON:  Frank, is this the time to take a break?  Till
     25 after and then we will hear from Michael.
         [Recess.]
         DR. BARTON:  I think we have got a quorum here, so we will
     get started.  Michael, you are next, with the assessment process.  I
     don't know if there's any other questions for Pat or not at this point,
     I don't think so.
         MR. JOHNSON:  Okay.
         DR. BARTON:  We will find out if he leaves, though.
         MR. JOHNSON:  Okay.  I just have a couple of brief slides to
     put up, and I don't want to -- or I hadn't intended to go over the
     entire assessment process.  We did that the last time we met with the
     ACRS.  I just wanted to remind us that the purpose of the assessment
     process is to assemble the information that we get, both from the PIs
     that we are collecting and from the inspection, the risk-informed
     baseline inspection, plus the additional inspection that we do, assemble
     that information to arrive at conclusions regarding plant performance
     and then, based on those conclusions, to identify resultant regulatory
     actions and then communicate the results of those, those actions, and
     the results of the assessment to the public.
         And then, finally, we want to provide a means of follow-up
     such that we look at future performance, factor that into the assessment
     process and make adjustments as appropriate, based on whether the
     licensee is or is not responding and addressing the problems that we see
     as being important.
         The second slide I will put up is this slide I will call key
     concepts.  I just wanted to remind us that the process is based on
     thresholds.  We have done a lot of talking this morning about thresholds
     for the PIs.  Also, keep in mind that we will have thresholds -- a means
     to evaluate the significance of findings from the inspection program and
     thresholds associated with that in each of the cornerstone areas.  And
     it is those thresholds that really drive us, or put us in the position
     where we consider additional action beyond allowing just the licensee to
     take action based on their performance.  So it is very much driven by
     thresholds.
         The assessment process considers the performance for a
     12-month rolling window.  That is not new, we have been talking about
     that for a while.  It is a rolling 12-month period of time that we are
     looking at, PIs being provided each quarter, and the inspection results
     will come in as we accomplish inspections.
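         [A minimal Python sketch of the 12-month rolling window:
     quarterly PI reports drop out of the assessment as they age past a
     year.  The dates and report fields are illustrative.]

     from datetime import date, timedelta

     def rolling_window(reports, as_of):
         # Keep only the quarterly PI reports whose quarter ended
         # within the trailing 12 months of the assessment date.
         cutoff = as_of - timedelta(days=365)
         return [r for r in reports if r["quarter_end"] > cutoff]

     reports = [
         {"quarter_end": date(1997, 12, 31), "scrams": 2},   # ages out
         {"quarter_end": date(1998, 12, 31), "scrams": 1},   # in window
     ]
     print(rolling_window(reports, as_of=date(1999, 1, 26)))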
         The assessment process provides a graded approach.  One of
     the things that is in Attachment 4 is something called an action matrix. 
     And in looking at that matrix then, and with the PI results and the
     inspection results, we have a graded way of looking at what we
     communicate to the licensee based on performance, the actions that we
     take, whether we do inspection, and so on and so forth.  All of our
     actions, our management involvement, are graded such that plants that
     perform better get, if you will, the minimal level of engagement by the
     NRC.  Plants whose performance crosses multiple thresholds get a
     greater degree of involvement by the NRC.
         The assessment process eliminates use of the watch list and
     superior performer recognition.  That was one of the things that even
     the IRAP concept proposed, and we thought it was important then, and we
     think it is still important that we do away with things like superior
     performer recognition and the watch list, we don't think they serve a
     purpose.
         And, finally, one of the things that we needed to do with
     the assessment process was look at plants that are in an extended
     shutdown.  Extended shutdown has an impact on the PIs, as you are well
     aware, as are the inspections, the additional inspections that we do. 
     We think it is necessary to do, with this assessment process that we
     are proposing, exactly what we do today, and that is to set those
     plants aside from this assessment process.
         And we rely on the Inspection Manual Chapter 0350 process
     that has restart approval requirements for us in terms of what we will
     look at to decide whether a plant is ready to restart, and we will make
     some logical decision about when that plant should rejoin the assessment
     process following their restart at the end of that extended shutdown
     period.
         Let me just put up very briefly the action matrix.  I
     mentioned this in terms of talking about the fact that the assessment
     process provides for sort of a graded range of actions.  If you will
     look across the top, that gives you the results for anything from a
     plant that would not cross any threshold -- any PI threshold or any
     inspection threshold -- all the way across to plants that have crossed
     multiple thresholds.  And then if you look down the left column, we
     talk about the things that we do in terms of response:  management
     meetings, licensee actions, NRC inspection and regulatory actions, and
     also the communication.  And so, for example, if you look in the left
     column, you can see that a plant that hasn't crossed any threshold, a
     plant with all the indicators and all the inspections in the green
     band, simply receives the risk-informed baseline inspection program. 
     And, again, as you move to the right, the responses become graded in
     increasing significance.
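         [A minimal Python sketch of the action matrix lookup.  The
     column labels paraphrase the graded responses discussed here; they are
     not the matrix's exact text.]

     ACTION_MATRIX = [
         "baseline inspection only (all inputs green)",
         "resident follow-up, documented in the inspection report",
         "increased oversight (one degraded cornerstone)",
         "senior management attention and supplemental inspection",
         "agency-level regulatory response",
     ]

     def graded_response(column):
         # Column 0 is the all-green band; higher columns mean more
         # thresholds crossed and greater NRC involvement.
         return ACTION_MATRIX[min(column, len(ACTION_MATRIX) - 1)]

     print(graded_response(0))   # all green: baseline program only
     print(graded_response(2))   # one degraded cornerstone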
         Let me mention one last thing, and I will shut up and take
     whatever questions you have.  There's --
         DR. APOSTOLAKIS:  Mike.
         MR. JOHNSON:  Yes, sir.
         DR. APOSTOLAKIS:  Would you put that back on?
         MR. JOHNSON:  Okay.
         DR. APOSTOLAKIS:  Where can I find it here?  Where is it, do
     you remember?
         DR. WALLIS:  Twenty-five.
         DR. APOSTOLAKIS:  Twenty-five.
         DR. WALLIS:  In the handout.
         DR. APOSTOLAKIS:  No, I know.
         MR. JOHNSON:  It's Attachment 4.
         DR. BARTON:  Attachment 4, George.  Attachment 4, page 19, I
     think.
         DR. APOSTOLAKIS:  Well, page 9 of the --
         MR. JOHNSON:  And page 9 of Attachment 1.
         DR. APOSTOLAKIS:  Attachment 1.  Page 9 of Attachment 1.  I
     am wondering, under Roman II there -- one or two inputs white, but the
     cornerstone objective is fully met -- why don't we move the regulatory
     action of documenting the response and make it a licensee action, and
     we do nothing?  And I am not even sure that the SRI or the branch chief
     should meet with the licensee.
         MR. JOHNSON:  I think the answer to your question is, when
     we cross a threshold -- and that is what we are talking about in that
     second column -- we would do some follow-up inspection in that area,
     whether it is a PI or whether it is an inspectable area from the
     risk-informed baseline inspection program.
         And so when we talk about documenting the response in the
     graded area and in the inspection report, what we are talking about is
     just making sure that that inspection report captures what the inspector
     has found in that additional inspection that follows up in that area. 
     We would do that.  It is sort of an obvious outcome of the inspection
     that you have done to address the fact that you have crossed that
     threshold.
         DR. APOSTOLAKIS:  Well, yeah, I understand that, but I guess
     what I am saying is in the name of granting relief, to both them and us,
     why don't we start getting involved when one of the cornerstones is
     degraded?  In other words, your third column, Roman III, you have one
     degraded cornerstone, so now the NRC is beginning to provide increased
     oversight, you know, that kind of thing.  It is really a matter of
     policy, but I am just asking why we can -- can we not do --
         MR. JOHNSON:  Yes.
         MR. GILLESPIE:  You're right, George, it is a matter of
     where you set the thresholds, and how much margin you are going to have
     before you get in a more interdictive mode.
         DR. APOSTOLAKIS:  Right.
         MR. GILLESPIE:  And we have basically set the threshold.  In
     fact, if you look in Reg. Guide 1.174, there is a graph in there that
     looks very much like the graph Pat uses but it is upside down.  And we
     are, in essence -- we are, in essence, adapting those same thresholds. 
     The bottom band or the least risk significant band in that Reg. Guide,
     where things would be turned over to licensees, is very consistent with
     the description of what we have got in the licensee response zone, or
     the green band.  And so what we did was try not to invent new policy,
     but adapted that perspective.
         DR. APOSTOLAKIS:  Yes, but in that guide, though, they are
     dealing with core damage frequency in there.
         MR. GILLESPIE:  Yes.  And --
         DR. APOSTOLAKIS:  Here, you are talking about not even a
     cornerstone being degraded.
         MR. BARANOWSKY:  But the thresholds were based on risk
     insights that put us into a core damage frequency level of risk
     comparable to that in Reg. Guide 1.174, where the NRC would provide
     review and approval of changes in licensee design and operational
     features, so --
         DR. APOSTOLAKIS:  Wait a minute.  I mean you can have --
         MR. BARANOWSKY:  That is what the white zone is meant to
     capture.
         DR. APOSTOLAKIS:  Yeah, but you mean you can have one or two
     inputs white, when, in fact, the cornerstone objectives are fully met,
     and see a difference in CDF?
         MR. BARANOWSKY:  No, the -- well, yes.
         DR. APOSTOLAKIS:  Really?
         MR. BARANOWSKY:  Well, I mean you can calculate the
     difference in CDF, it is small.  I mean, for instance, in Reg. Guide
     1.174, it says the licensees can make permanent changes to their plant
     with a CDF change of up to about 10 to the minus 5.  And the white band
     of this performance model includes core damage frequency changes up to
     10 to the minus 5, so it is meant to be consistent with Reg. Guide
     1.174, where there is regulatory oversight of risk-informed applications
     that occur with changes up to 10 to the minus 5.
         The same thing over here, when you cross any one threshold,
     the implications are comparable to being in that kind of a situation
     with regard to changes or increases in risk.
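         [A minimal Python sketch of the banding Mr. Baranowsky ties
     to Reg. Guide 1.174:  the white band covers CDF changes up to about 10
     to the minus 5 per year.  The green/white edge used here is an assumed
     value for illustration.]

     def performance_band(delta_cdf):
         # delta_cdf is the change in core damage frequency per year.
         if delta_cdf < 1e-6:
             return "green -- licensee response band"   # assumed edge
         if delta_cdf <= 1e-5:
             return "white -- regulatory oversight, per RG 1.174"
         return "beyond white -- increased regulatory engagement"

     print(performance_band(4e-6))   # a small change lands in white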
         DR. APOSTOLAKIS:  Well, I would expect that if the
     cornerstone objectives are fully met, even if one or two inputs are
     white, you wouldn't see much --
         MR. BARANOWSKY:  Well, you are not going to see much.
         DR. APOSTOLAKIS:  Unless the plant was already one of the
     great performers, way below the threshold, so by increasing it a little
     bit, you know, --
         MR. BARANOWSKY:  This isn't a regulatory hammer.  This is
     really just the beginning of taking a look so that we don't have things
     go from everything is all right to shut them down the next day.  This
     way the licensee sees it coming.
         DR. APOSTOLAKIS:  Right.
         MR. BARANOWSKY:  There is plenty of warning.  The NRC sees
     it coming.  And, in theory, it is a very modest regulatory engagement at
     that point.
         MR. JOHNSON:  Yeah, and it is very much -- it is very much
     philosophical.  The notion was if the system is set up on cornerstones,
     you don't wait till you have got a degraded cornerstone before you
     engage.  You engage at a point where you are seeing some degradation,
     but not degradation significant enough to take down the cornerstone, if
     you will.
         DR. APOSTOLAKIS:  Yes, but if you look at 3, it says
     cornerstone objectives met with minimal reduction in safety margin.  So
     it is not that, you know, the next category is something that already is
     pretty bad.  You are still meeting the objectives even in Roman III.
         MR. JOHNSON:  Right, but we are talking about --
         DR. APOSTOLAKIS:  What I am saying is doing nothing in the
     first two columns except being informed and then start doing things 3
     and above.
         MR. BARANOWSKY:  Not meeting a cornerstone objective is
     considered pretty significant, it is not the beginning of engagement for
     us.  I mean that is the equivalent of saying I don't have any
     defense-in-depth or something like that.
         DR. APOSTOLAKIS:  No, but you are still meeting it, though. 
     Cornerstone objectives met.
         MR. BARANOWSKY:  No, I understand that.
         DR. BARTON:  Yes.
         MR. BARANOWSKY:  But if we wait until we get to the point
     where things are really broken, that is one of the concerns that was
     raised about not having leading indicators in the current process in
     which one day the plant is operating, the next day we go there and we
     say, oops, we have to shut you down because this is really broken badly. 
     We are trying to have a process here that slowly steps into a
     recognition of declining performance so that there is plenty of
     opportunity to turn that around before we ever get to these level 4 or 5
     conditions.  I mean we never want to get there.
         DR. APOSTOLAKIS:  I know, 4 and 5, I agree.
         MR. GILLESPIE:  Remember, this is an indication system, and
     the severity or the pervasiveness of the root cause to the reason the
     indication has tripped a threshold is really the question, once you trip
     a threshold.  And, in fact, NEI's proposal, which we have spun off of
     here, up-front said if you meet all the PIs that are proposed, that
     should be evidence that your corrective action problem identification
     system is working.  If you trip a threshold, there is at least some
     essence of evidence that your problem identification corrective action
     system isn't working.  That is all we have got.  We have got some
     evidence that problem identification corrective action isn't working. 
     Now, when we get more diagnostic, the question is, how pervasive is it
     or isn't it?  This is an indication where we ask that question.
         So that is kind of one of the fundamental premises that even
     the industry proposed, actually, similar levels.  If you remember, they
     proposed three scrams and we kind of verified that that looked good to
     us on the same basis.  So that is how we are coming at it.
         Could the root cause -- are we good enough to have created
     the perfect system?  No.  So that is how the thresholds were set, and
     that is the intent of the indication.  The primary indication is there
     is a problem with problem correction and identification of problems.
         MR. JOHNSON:  There is one other thought I ought to mention,
     too, George, and it is in Attachment 4, and it mentions the fact that
     when we go to apply this action matrix, one of the first things we would
     do is to consider what the licensee has already done.
         So, for example, if you -- let's suppose you have one or two
     of these areas where the licensee has crossed the threshold.  The first
     thing you would do is to look to see if the licensee had taken some
     action to do some diagnostics, had taken some action to address the
     problem, and that would further temper what we would do in terms of the
     follow-up inspection that we would plan.  So, we really see this second
     column as one that is really a very measured response, but necessary as
     licensees begin to cross these individual thresholds associated with PIs
     and inspections.
         DR. APOSTOLAKIS:  But look at the first column.  All
     assessment inputs are green, and there is licensee action, licensee
     corrective action.  To correct what?
         MR. JOHNSON:  Well, that simply says if the licensee is in
     the green band, all of these indicators are in the green band, where the
     inspection would find issues, we would simply put those issues in the
     licensee's corrective action program and the licensee would address them
     in accordance with whatever prioritization scheme they set up, because
     the licensee has had good success at doing that.  This is the routine
     dealing with issues as they arise.
         DR. APOSTOLAKIS:  So there will be issues even though all
     assessment inputs are green?
         DR. BARTON:  Oh, yeah.
         MR. JOHNSON:  Oh, sure.
         DR. BARTON:  The world is not perfect, they are saying. 
     There may be little blibets, but you are still going to be green because
     you haven't crossed any threshold.
         DR. APOSTOLAKIS:  Like what issues?
         DR. BARTON:  I don't know, give us an example of where it
     could be.
         MR. GILLESPIE:  Tag-out problems.
         DR. BARTON:  All right.  Equipment tag-out.
         MR. GILLESPIE:  Equipment tag-out.
         DR. BARTON:  There you go.
         MR. GILLESPIE:  Not safety significant, didn't compromise
     the system.  Inspector sees it, it is a requirement.
         DR. APOSTOLAKIS:  But that is my point, though.  Why should
     you worry about it if all the assessment inputs are green?
         MR. JOHNSON:  And that is our comment is we don't worry
     about it.  That is left to the licensee.
         DR. APOSTOLAKIS:  But why would you even notice?  Why would
     you even bother to look at it?
         MR. BARANOWSKY:  Because you don't know, until you get all
     the facts together, whether or not the facts indicate there is a
     problem.  You can't go and say there is no problem so I won't look for
     facts.  Your job is to go get the facts and then see if they indicate
     that you should have a more aggressive examination or follow-up
     activity.  So the first step is getting the information; that is all
     "in the green" is about:  a baseline inspection program to get baseline
     information, performance indicators, inspections.
         DR. APOSTOLAKIS:  Wait a minute.  All the assessment inputs
     are green, everything.  All the indicators, the areas, right?
         MR. BARANOWSKY:  Right.
         DR. APOSTOLAKIS:  There is nothing to correct.
         MR. JOHNSON:  No, that's not true.
         MR. BARANOWSKY:  The NRC won't be taking a corrective action.
         DR. BARTON:  But the licensee has some actions.
         MR. BARANOWSKY:  The licensee, when they find tag-out
     problems, will be taking a corrective action.  The theory is if you let
     the little things go, they become bigger, and then they become bigger,
     and then they become bigger.
         DR. SEALE:  Indeed, one of the concerns is that by the time
     they reach -- things reach the level of recognition on the NRC's screen,
     you may already be on a slippery slope that may be very difficult to
     turn around.  It is my understanding that INPO is looking at indicators
     at a much more sensitive level, as many as a hundred per plant in some
     cases, and that they would hope that using those indicators, the
     licensees themselves, with INPO's help, would be able to anticipate
     things so that you never let any -- nothing you ever saw got out of the
     green.
         MR. BARANOWSKY:  Yes.
         MR. GILLESPIE:  There's actually -- there's INPO -- WANO and
     INPO are kind of one in this -- and WANO's had a lot of input into an
     IAEA document which is currently circulating in draft -- I don't believe
     it's available for the public -- which takes the next step that we've
     said we wouldn't take.  Not only is it indicators from the facility, but
     it talks about safety culture --
         DR. SEALE:  Yes.
         MR. GILLESPIE:  And it talks about indicators of management. 
     So there are, internationally and nationally, the industries themselves
     looking at it.  The IAEA document is a document for station personnel,
     and specifically in the front of it it says this is not intended for the
     regulator.
         DR. SEALE:  Yes.  Yes.
         MR. GILLESPIE:  So there is that activity.
         DR. SEALE:  Yes, but it does address these --
         MR. GILLESPIE:  Yes.
         DR. SEALE:  These culture-related concerns that we've raised
     in the past in some of our discussions.
         DR. APOSTOLAKIS:  Well, again, it's okay for INPO and the
     industry to do things like that.  When we do something, I mean, we have
     to be a bit more careful.  And -- let me put it in a different way.  I
     get the impression that we are changing, you know, it is risk-informed
     and so on, but we still -- we don't want to let go, in other words.  We
     still want to get involved in detail.
         A bolder approach might be to say regulatory actions, none
     in the first column, none in the second.  Just be informed of what's
     going on, and then start worrying about it at 3.  And if they have
     tagging problems, it's their problem, you know.  We don't get involved
     in these things.
         MR. GILLESPIE:  Well, I think, you know, we've kind of got
     -- now we're stuck in the compromise position we're in.  We have a set
     of regulations and requirements currently out there.  An inspector walks
     through the plant, and while he's looking at risk-significant systems
     and risk-significant spaces, and he's looking for safety problems, he
     sees that tags are mistagged and valves are mislocked.  It doesn't rise
     to being -- it doesn't negate a system, doesn't negate a train, doesn't
     maybe rise to that level.  Does he walk away and ignore that and not
     tell anybody?  That's what you're suggesting, George?  We can't tolerate
     that.  He has to document it.
         Now how we dispose of it becomes the question.  And what
     we're suggesting here as the disposition is it gets turned over almost
     as if he were a QA inspector walking through the same space; it gets
     integrated into the licensee's system.  Now we're trying not to put undue
     pressure or undue priority on it, and in fact it gets into their system and
     gets fixed in accordance with whatever priority system they're using.
         DR. BARTON:  It's level 4.
         MR. GILLESPIE:  It's like the level 4 policy statement going
     through now.
         DR. BARTON:  Right.
         MR. GILLESPIE:  I mean, that's the approach.  The second
     column on this, the darn senior resident lives at the site.  Him
     finding, asking, you know, why did you bust this threshold, when he's
     already there, doesn't -- is kind of a real measured response.  I mean,
     I would be hard pressed to have someone bust a threshold and have one of
     the two residents we've got at the site say I'm not going to ask.  So
     it's a real measured response.  It's kind of proportional to the idea of
     busting one threshold.
         You have your resident at his normal weekly meeting -- and
     generally, although they're on six-week reports, they have kind of a
     weekly debriefing on what their findings are -- and he says hey, you
     know, your quarterly report showed this.  What's going on here?  It
     seems like a very measured response.  Could it be none?  I don't know
     how we'd administer none and tell a resident not to ask that question.
         DR. MILLER:  These responses you judge are minimal.
         MR. GILLESPIE:  Yes.  Oh, yes, column 2 is very minimal.
         DR. MILLER:  I don't know how you'd define it, but --
         MR. GILLESPIE:  It is.
         DR. MILLER:  They would just be routine, the senior
     residents walking around or the residents.
         MR. GILLESPIE:  Right.
         DR. MILLER:  And we need to document it, because it does --
         MR. GILLESPIE:  Yes.
         DR. MILLER:  Break the written rule, but it has no effect on
     risk.
         MR. GILLESPIE:  Right.
         DR. SEALE:  You're letting the licensee know that the
     inspector is not brain-dead.
         MR. GILLESPIE:  Right.  The other thing is you're letting
     the public know that we're still looking.  Now we've got another
     audience.  It isn't just us and the licensees.  And, you know, one could
     say well, the inspectors won't bother writing any inspection reports
     unless they found a complete cornerstone completely degraded.  And
     that's not acceptable in the public nature of our business.
         DR. MILLER:  But the burden on the licensee in column 4 is
     probably equal if not somewhat less than it is today.  Is that right or
     not?
         MR. GILLESPIE:  Yes.  In fact, at the Commission meeting --
     yes, column 4 -- at the Commission meeting said --
         DR. MILLER:  Column 2; I'm sorry.
         MR. GILLESPIE:  Oh, okay.  I was going to say the
     Commissioners actually commented on column 4 and 5.  They spent some
     time on this and said move the Commission over to column 4.  That was
     their comment.  So, yes.
         DR. MILLER:  But column 2 is less than it is today, or equal
     --
         MR. GILLESPIE:  Oh, yes.  I would expect that in column 2, just
     crossing one threshold where there's a readily available explanation
     would not mean sending a three-man team out to investigate it. 
     The first thing is the SRI asking, what's happening here?  Do you have a
     reasonable explanation?  If the answer is yes, he documents the reasonable
     explanation.  The other side of being the public agency we are is we're going
     to be publishing these PIs quarterly.  We're going to be -- it would be
     irresponsible to publish a PI that shows a threshold crossed and have no
     text that said here's what it was, and it appears to be okay.
         DR. SEALE:  I'm intrigued by that bottom row down there
     where you have a differentiation or a dividing line between regional and
     agency review.  You've got a pilot coming up.  Will all of those pilots
     be in one region?
         DR. BARTON:  They could be in all regions.
         MR. JOHNSON:  No --
         DR. SEALE:  One in each?
         MR. JOHNSON:  Two pilot plants in each region.
         DR. SEALE:  Yes.  Okay.  Some in each region.
         MR. JOHNSON:  Eight.
         MR. GILLESPIE:  Eight pilots.
         DR. SEALE:  Some in each -- okay.  It's going to be very
     interesting to see the degree to which unanimity of what that means
     emerges from the different regions.  And that's going to be a job for
     you, I think.
         MR. GILLESPIE:  Consistency --
         DR. SEALE:  Yes.
         MR. GILLESPIE:  Well, the starting point is we'll have
     something we don't have today.
         DR. SEALE:  Sure.
         MR. GILLESPIE:  We've at least got a first shot at a scale. 
     Now we can adjust the scale in effect.  That's the kind of comments
     we're expecting to get.  But at least we have a starting point to
     measure people against.  One of the major criticisms of the whole
     oversight assessment process was we didn't write our criteria down up
     front, and so here we're attempting to do that.
         DR. SEALE:  You may have heard that from us, too.
         MR. GILLESPIE:  Yes.
         DR. APOSTOLAKIS:  One of the things that still is not clear
     to me, Frank, and maybe you can clarify it, is okay, you go down and you
     do the inspection and the licensee -- the performance indicator on
     scrams is high, above the threshold.  And the licensee says there is
     nowhere in my license that says that I have to keep this below 3.  I'm
     not going to take any corrective action.  It was nine from Day 1.  Why
     are you asking me now to change it?  What is the new regulation that's
     forcing me to do this?  And you say PRA.
         MR. GILLESPIE:  No, we're not going to ask them to change.
         DR. APOSTOLAKIS:  Okay.  You're not going to say PRA.
         MR. GILLESPIE:  No.
         DR. APOSTOLAKIS:  What would you answer -- how would you
     answer?
         MR. GILLESPIE:  At that point we would end up doing
     additional inspection, since he's refused to do a root-cause analysis,
     and the inspection would likely be on his root-cause analysis problem
     identification process, and we would then link that back to regulatory
     requirements.
         MR. JOHNSON:  Right.  What caused the scrams.
         MR. GILLESPIE:  And if he's not violating any regulatory
     requirements, then you're right.
         DR. APOSTOLAKIS:  That's right.  That's what I'm saying.
         MR. GILLESPIE:  The nine scrams is an indication we need to
     look further and tie that to what requirements are either being violated
     or what new requirements are needed.  We're not going to force him to
     meet the threshold.  But if anyone has nine scrams in a year --
         DR. APOSTOLAKIS:  Let's say you find that no regulatory
     requirements are violated.
         MR. GILLESPIE:  Oh, we could find that, in a difficult
     startup after a long shutdown.
         DR. APOSTOLAKIS:  What was the frequency of scrams in the
     United States 15 years ago?
         DR. BARTON:  About nine.
         MR. GILLESPIE:  Yes, nine.
         DR. APOSTOLAKIS:  Nine a year.
         MR. GILLESPIE:  Eight, nine a year.
         DR. BARTON:  That's in the ballpark.
         DR. APOSTOLAKIS:  Once we're licensed, the guy says it was
     nine then, it's nine now.
         MR. BARANOWSKY:  Yes, but we have both Appendix B and we
     have the maintenance rule, and both of those regulatory requirements
     force you to take corrective actions when your reliability or
     performance is not up to what's expected.  In the maintenance rule we
     have some targets in it for you to meet.  And clearly this would be
     outside of what would be called for in the maintenance rule, and
     certainly Appendix B has elements of quality in it that this would
     challenge.
         DR. APOSTOLAKIS:  Are the targets of the maintenance rule
     utility-selected?
         MR. BARANOWSKY:  Yes.
         DR. APOSTOLAKIS:  Okay.
         MR. BARANOWSKY:  And I can tell you one thing, they're
     certainly less than nine.
         DR. SEALE:  Yes.  I think there's another way to look at
     this.  Nine years -- or ten years ago we had nine scrams.  Ten years ago
     we had the inspection system that we're still living with today.  Now
     that we don't have nine scrams a year, maybe it's time to take a look at
     the inspection system and bring it more in tune with what
     the practices are today.  But you still use that as the thing that opens
     the door to the kind of inspection that is more reminiscent of what you
     have today if somebody does show up with nine scrams.
         DR. APOSTOLAKIS:  My fundamental problem is that the basic
     regulations that people have to comply with are not risk-informed.
         DR. SEALE:  True.
         DR. APOSTOLAKIS:  And the inspection oversight program is
     risk-informed, and nobody sees a possible conflict here?
         MR. GILLESPIE:  No, we do.  We do.  In fact, we highlighted
     that in the Commission paper as a conflict, because we are right now out
     in front of the regulations.
         DR. APOSTOLAKIS:  Yes.
         MR. GILLESPIE:  Absolutely.
         DR. APOSTOLAKIS:  We are changing the way we oversee things.
         MR. GILLESPIE:  And --
         DR. APOSTOLAKIS:  Without knowing why.
         MR. GILLESPIE:  No.
         DR. APOSTOLAKIS:  We have empirical knowledge --
         DR. BARTON:  So we shouldn't improve this process --
         MR. GILLESPIE:  We know why we're changing, but -- and
     that's the difficult part.  The only step we can take is, we can't
     enforce thresholds.  Thresholds are an indication as to when we go
     further.  And then we're forced, if you would, into making a link of
     what is the requirement that links to the root cause.
         MR. JOHNSON:  We're forced to use the existing regulatory --
         MR. GILLESPIE:  Structure.
         MR. JOHNSON:  Structure to take action against the licensee.
         MR. GILLESPIE:  I would love not to have to do that, but
     that's where we are.  That's our only option.
         DR. KRESS:  The link is that you're assuming that you have a
     body of regulations there and you're assuming that if a plant does meet
     all those regulations that are out there, then it is more likely in the
     green zone?
         MR. GILLESPIE:  That's inherent in the assumption.
         DR. KRESS:  And so you've got indicators that you don't have
     to go in and see if they meet all the regulations, you just go look at
     these indicators.  Now if somebody gets out of them, your assumption is,
     well, more than likely they don't meet some of these regulations, so
     we'll go in and see.
         MR. GILLESPIE:  Right.
         DR. KRESS:  And you may find that they still meet the
     regulations.
         MR. GILLESPIE:  That's true.
         DR. KRESS:  Or you may find that they don't.  And your
     response is going to be depending on what you find by that additional
     inspection.
         DR. APOSTOLAKIS:  What do you think the response --
         DR. KRESS:  So I think they're forced to go in and their
     response has to be well, do they meet the regulations or not?  And maybe
     those regulations will change over time, and so forth, but I think
     they're right in saying --
         DR. APOSTOLAKIS:  What do you think the response --
         DR. KRESS:  That they're boxed in to saying meet the
     regulations.
         DR. APOSTOLAKIS:  What do you think the response will be if
     they find that they meet the regulations but two of the performance
     indicators are above the threshold?
         DR. KRESS:  That's a good question.  That's where the
     conflict's going to be.  What do you do about that?  I think they have
     to backfit or --
         MR. GILLESPIE:  No --
         MR. BARANOWSKY:  That's so highly unlikely, because what we
     currently have today is performance that would be well within the green
     band in which we still cite licensees on Appendix B violations.  In the
     real world it's exactly the opposite of the problem you're postulating
     occurring in the future.
         DR. SEALE:  There's also another response, and that is while
     they may be complying with what the regulations state, they're certainly
     not meeting the utility peer group's aspirations for excellence, and the
     other utilities are going to be all over them, because they have the
     same indicators, they're looking at them for the same kinds of concerns,
     and nine is not a satisfactory way of doing things in their world where
     they think in terms of excellence and performance.
         DR. MILLER:  And that peer pressure by some is perceived as
     being more of a hammer --
         DR. SEALE:  That's right.
         DR. MILLER:  Than is the regulatory pressure.
         DR. APOSTOLAKIS:  What kind of world are you guys
     describing?  Plato came again down here and I didn't realize it.  This
     is so platonic.  This is the ideal world.  You know, you will be shamed
     and your peers will come and correct you?  Has that happened before?
         DR. SEALE:  It's happened before.
         MR. BARANOWSKY:  Actually the PUCs are also important here. 
     When the PUCs end up seeing something happening, there's a question as
     to whether or not --
         DR. SEALE:  That's right.
         MR. BARANOWSKY:  There should be changes in rates and so
     forth.
         MR. GILLESPIE:  And all of that, we're still an independent
     regulatory agency, and we're setting our -- the fact that our thresholds
     and our perspective is coincident with some societal pressures is very
     nice from a system point of view, but clearly these are our thresholds and
     our action statements.
         DR. KRESS:  When you come to enforcing things, I think you
     really are bound by the existing regulations.
         MR. GILLESPIE:  Oh, yes.
         DR. KRESS:  If you end up having this conflict, George,
     where -- and I'm sure it can exist -- where the risk performance metrics
     appear to be way out of line and they're still meeting all the
     regulations, I think you've got a problem with the regulations.
         MR. GILLESPIE:  The ideal piece where this would be
     explained away --
         DR. KRESS:  Fix the regulation.
         DR. BARTON:  Fix the regulation.  Right.
         DR. KRESS:  But that's a -- you know, that's a backfit and
     that's a rule and you have to make new rules.  But I think that's where
     you have to attack.  If you find this to be a real problem that's
     pervasive, I mean, I think you can stand it --
         DR. APOSTOLAKIS:  Or you can live again with an unfortunate
     situation like after the IPEs were done we realized that 19 PWR units were
     above the goal but they meet the regulations.  There's nothing we can
     do.  Right?  Is that a happy situation for us?
         DR. KRESS:  Probably not.
         DR. APOSTOLAKIS:  Probably not.
         DR. BONACA:  I just have a comment I'd like to make.
         DR. KRESS:  The backfit rule is one of the things that's got
     us into this box.  You know, if you can't satisfy the backfit
     requirement, you can't make a new rule.  And that's one of the things
     that has us in this box.
         DR. BARTON:  Mario?
         DR. BONACA:  Yes, I had a comment regarding -- one thing
     that, you know, looking at these performance indicators, in fact, I
     don't know how many plants would not meet these performance indicators
     quantitatively.  I mean, I haven't seen many plants that have more than
     three scrams per year, I mean, in fact.
         And the concern I have is the reverse one, which is that
     rather than being an objective process, again there is a lot of
     reliance on the complementary or supplementary inspections.  And so the
     burden again falls on the staff to show that it is in fact an
     objective process that you are using.
         Maybe the question I have is how did you get these
     thresholds, and they seem to be in fact much more relaxed than what you
     have from INPO.
         DR. APOSTOLAKIS:  In fact, go to page H-18.
         DR. BONACA:  I know.
         DR. APOSTOLAKIS:  You will find one, two, three, four, five,
     six, seven, eight plants whose number of unplanned scrams for a year is
     above the threshold.
         DR. BONACA:  Yes, that's right.
         DR. APOSTOLAKIS:  By design.  So what do these eight plants
     do?
         DR. BARTON:  What they'd better do is have a corrective
     action program to reduce the number of scrams, but if you're shut down,
     you're not going to be economically feasible, pa da pa da pa da, down
     the path you go.
         DR. APOSTOLAKIS:  As a result of oversight?
         DR. BARTON:  Pardon?
         DR. APOSTOLAKIS:  As a result of oversight they have to do
     that?
         MR. GILLESPIE:  No, there's going to be logical explanations
     in some cases.  If you have a plant that's been shut down for two years,
     of which we happen to have several, and you go to start up, it is not
     unprecedented even in the first month you would have three, four, five
     scrams.  But you'll generally identify exactly what it was very rapidly,
     and you'll correct it and move on.  That's an explanation.
         DR. APOSTOLAKIS:  Incidentally --
         MR. GILLESPIE:  It means we have to understand and explain. 
     It doesn't mean we have to take action.  And the licensee just has to be
     able to explain it and it has to make sense and be logical.
         DR. APOSTOLAKIS:  I don't know.  I mean, that is related to
     Dana's question earlier about not regarding the PRA.  I don't know how
     NEI produced these figures.  Was it based on one year's performance?
         MR. BARANOWSKY:  No.
         DR. APOSTOLAKIS:  Was it based on a lot of years'
     performance?
         MR. BARANOWSKY:  No.  It was based on I believe five years'
     performance and it ran from approximately 1992 through '97.
         DR. APOSTOLAKIS:  So for each plant it was the same
     denominator, five years.
         MR. BARANOWSKY:  It was on a per-year basis, but they
     collected for that five years, and then they took the peak value for
     each plant in that five-year period.
         DR. APOSTOLAKIS:  Oh.
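         [Illustrative sketch:  the threshold construction just described --
     take each plant's peak annual value over the five-year window, then set
     the threshold near the 95th percentile of those peaks across the fleet --
     might look like the following.  The data and names are hypothetical, not
     the NEI calculation itself.

         import numpy as np

         def pi_threshold(per_year_counts, pct=95.0):
             # Worst (peak) year for each plant over the window, then the
             # pct-th percentile of those peaks across the fleet.
             peaks = np.max(per_year_counts, axis=1)
             return np.percentile(peaks, pct)

         # Hypothetical unplanned-scram counts: rows = plants, columns =
         # the five years 1992-1996.
         counts = np.array([[0, 1, 0, 2, 1],
                            [3, 1, 2, 0, 1],
                            [1, 0, 0, 1, 0],
                            [2, 4, 1, 1, 2]])
         print(pi_threshold(counts))

     By construction, roughly five percent of the fleet's peak-year values
     fall above such a threshold.]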
         MR. BARANOWSKY:  Now there are a couple of things that are
     important.
         One is performance has been improving over that period and
     since that time, so there's a reasonable likelihood that there would be
     fewer hits in any given year than would be indicated by the 95 percent --
         DR. APOSTOLAKIS:  Because I was wondering if you have five
     years' worth of data for each plant, I mean the codes are available
     now -- a two-stage Bayesian thing would be --
         MR. BARANOWSKY:  We got 'em.
         DR. APOSTOLAKIS:  You have them already?
         MR. BARANOWSKY:  We can apply a lot more sophistication than
     we did.
         DR. APOSTOLAKIS:  Yes.
         MR. BARANOWSKY:  But we kept it very simple for this
     go-around.  We wanted to keep the mathematical elegance to a minimum so
     that it could be more easily understood by the everyday inspection kind
     of person and licensees without going through a lot of arguments about
     is this a good computer code, is that a good mathematical method, is the
     prior proper and so forth -- we wanted to get away from that for now
     because we just wanted to try this out.
         It will evolve into something a lot more sophisticated as we
     get more plant-specific and more risk-based and we improve on the
     indicators, but for now, it is a very simple approach.
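         [Illustrative sketch of the "two-stage Bayesian thing" suggested
     above:  pool the fleet data to fit a prior, then update each plant with
     its own record.  A toy gamma-Poisson empirical-Bayes version, with
     hypothetical data and an assumed model, follows; a full two-stage
     treatment would put a prior on the gamma parameters as well.

         import numpy as np

         def gamma_poisson_eb(counts, years):
             # Stage 1: fit a gamma prior to the fleet's scram rates by
             # the method of moments.  Stage 2: Poisson-update each plant.
             rates = counts / years
             m, v = rates.mean(), rates.var(ddof=1)
             beta = m / v                  # gamma rate parameter
             alpha = m * beta              # gamma shape parameter
             return (alpha + counts) / (beta + years)  # posterior mean rates

         counts = np.array([3.0, 9.0, 1.0, 4.0, 2.0])  # hypothetical 5-year totals
         print(gamma_poisson_eb(counts, years=5.0))

     The high-count plant's estimate shrinks toward the fleet mean, which is
     the pooling effect the two-stage approach is after.]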
         The thresholds there have minimal risk implications.  We ran
     some calculations using those 13 models I told you about and for that
     particular threshold we didn't see anything that approached the 10 to
     the minus 5 change in core damage frequency from what would be a mean
     type of performance over that period of time, when we moved up to that
     threshold.
         In some cases we put all the plants' performance at the
     threshold to see if the risk was high, and it was a relatively low risk
     associated with operating within that performance band -- and by relatively
     low risk, what I mean there is relative to what we have today.  That is low risk.
         Increases in risk when one moves up to those limits are
     small enough that they wouldn't cause us to be concerned with regard to
     the kind of thresholds for risk applications in Regulatory Guide 1.174.
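         [Illustrative sketch of the screening logic described:  move an
     indicator from mean performance up to its threshold in a risk model,
     recompute core damage frequency, and compare the change against the
     1E-5 per year neighborhood of Regulatory Guide 1.174.  The linear model
     and numbers below are hypothetical stand-ins, not the staff's 13 plant
     models.

         def delta_cdf(cdf_per_event, mean_rate, threshold_rate):
             # Added core damage frequency from raising an initiating-event
             # frequency from mean performance to the indicator threshold,
             # assuming CDF scales linearly with the event frequency.
             return cdf_per_event * (threshold_rate - mean_rate)

         d = delta_cdf(cdf_per_event=2e-6,   # hypothetical CDF per scram/yr
                       mean_rate=1.0,        # hypothetical fleet-mean scrams/yr
                       threshold_rate=3.0)   # indicator threshold, scrams/yr
         print(f"delta-CDF = {d:.1e}/yr; passes 1e-5 screen: {d < 1e-5}")]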
         MR. JOHNSON:  So let me just tell you what we'll see --
         DR. APOSTOLAKIS:  Before we go to that though, let's talk
     about the substance.
         MR. JOHNSON:  Which one, George?  Back to the matrix?
         DR. APOSTOLAKIS:  This is pages 14, 15 of the Attachment
     2 -- cross-cutting issues.
         MR. JOHNSON:  Yes.
         DR. APOSTOLAKIS:  Can we discuss those?
         DR. BARTON:  Go ahead.
         DR. APOSTOLAKIS:  You have a very nice discussion of safety
     culture which is under the name Safety Conscious Work Environment, but
     then you dismiss it -- "No separate and distinct assessment of
     licensees' safety culture is needed because it is subsumed by either the
     PIs or baseline inspection activities."
         Does everyone agree with that?
         DR. BARTON:  Well, I think they are driven to that based
     on -- it's the policy of the Commission:  if you don't go and inspect
     safety culture, management effectiveness, and all that stuff, I think you
     are forced into this approach.
         DR. APOSTOLAKIS:  Why don't we say that?
         MR. BARANOWSKY:  That's not really -- I mean that may be one
     reason why you get to that conclusion, but that's not why we got there
     when we did this work.
         The real reason we got there is we're looking at outcomes
     that affect the objectives of the cornerstones that we identified.
         DR. BARTON:  Okay.
         MR. BARANOWSKY:  And our conclusion was that this is an
     underlying cause to outcomes.  As long as we measure the outcomes, which
     is the performance indicators and the inspections, and if those outcomes
     are acceptable we don't have to go and check into whether or not they
     have the right attitude at their plant.
         DR. APOSTOLAKIS:  So you are measuring human performance?
         MR. BARANOWSKY:  Sure.  People make mistakes and cause
     plants to trip.  People make mistakes and cause equipment to fail or be
     unavailable.  To the extent that those things don't occur, we are
     satisfied with the human performance.
         Now there are some things that we can't measure with the
     performance indicators like how will the operator respond to a real
     accident.
         That's where we would have inspection or related activities
     associated with the operator licensing program.  That is a complementary
     area for inspection.
         DR. APOSTOLAKIS:  Well, let's see what you are saying about
     post-initiator operator actions.
         You admit that they are far less frequent -- that's page
     14 -- while initial and requalification examinations provide a
     predictive measure of operator performance during off-normal and
     emergency operations, follow-up inspections of risk-significant events
     will provide a more direct indication of the adequacy of post-initiator
     human performance.
         In other words, you are saying that, look, we really -- I
     mean there is no way we can know how the guys are going to perform.
         MR. BARANOWSKY:  Actually, what we are saying is that we
     have had a very successful operator qualification and training program
     and that unless we have objective information or evidence given to us as
     a result of abnormal events that really challenge the operations and
     show that they didn't handle the situation properly, we are going to
     rely on that program to give us confidence that that is an acceptable
     area of licensee performance.
         We had some incidents occur in which operators have made
     some errors during these incidents, and those would be the things that
     we would look into to see whether or not those are an occasional human
     error problem or whether they are indicative of deeper problems with
     their program.
         DR. BARTON:  I think inspectors go look at, if you put those
     into the corrective action program, they go look at what is going on at
     the site with respect to improving human performance and reducing
     operator error.
         The inspectors look at that, so that's in real life.
         DR. BONACA:  Yes, the industry is moving, however, in a
     direction where you can group elements of human performance,
     safety-conscious work environment, problem identification in safety
     culture and you have very clear indicators of it.
         For example, you can look at operator challenges, temporary
     mods, operator work-arounds.  Those are typical indications of poor
     culture if you don't correct those, or, for example, corrective action
     programs that don't solve your issues or don't find the proper root
     cause.  That is an indication of that.
         It may be worthwhile in fact at some point to group these
     other indicators that you have inside here under some heading -- because
     the industry is moving in that direction and I think they should be
     encouraged to move in that direction, and I think something could be
     done inside here.
         MR. BARANOWSKY:  In fact, we would encourage them to do that
     kind of thing because that is like an early indication of an indicator,
     if you will.
         DR. BONACA:  That's right.
         MR. BARANOWSKY:  And what they are trying to do is maintain
     their performance above that threshold and so by having these kinds of
     additional performance indication or inspection activities that they
     perform themselves, which go really beyond what the NRC does, they have
     the opportunity of controlling their own destiny.
         DR. BONACA:  That's right.
         MR. BARANOWSKY:  That's really the idea that we are trying
     to foster here is that the NRC does one to five percent of all the
     inspections and evaluations that go on at a plant.  The licensee does
     the other 95 or 96 percent or whatever it is, and it is their ability to
     do that stuff correctly that we are trying to take sample information on
     to see what the indications are.
         MR. JOHNSON:  Right.  In fact, the industry was very
     interested in looking at the things that you have just mentioned.  They
     were also very interested in having us not look at them if they don't
     reflect themselves in actual performance, and so that is sort of how we
     have come out.
         DR. BONACA:  But you do have significant expectations in
     safety-conscious work environment and to me it seems you should
     translate those expectations into measurable things.  You know, I gave you
     some examples of those because there are relationships there, so I
     understand the interest of the industry also not having you quantify
     some of those, but to the degree to which you are interested in those
     specific assessments, some correlation with the specific metrics could
     be important.
         DR. APOSTOLAKIS:  When the Commission decided that it's none
     of our business to look at safety culture, they interpreted safety
     culture the way you interpret it.
         Let me be a little more clear.  If you read reports from
     Vienna on safety culture, safety culture consists of two elements they
     say.  One is the organizational structure and the other is the
     attitudes, the norms and so on.
         You seem to be making a distinction.  You are saying
     safety-conscious work environment, otherwise known as safety culture,
     really refers to the norms and attitudes because you still worry about
     the organizational structure on the problem identification and
     corrective action programs.
         What is not clear to me is whether the Commission also
     thinks about it that way.
         DR. BARTON:  That's a good point.
         DR. SEALE:  Well, didn't they really say don't look at
     management --
         DR. BARTON:  Don't look at management effectiveness and
     competence.
         DR. APOSTOLAKIS:  But what does management mean?  Does it mean
     your work processes and the way you manage the business, or only the
     managers?
         DR. BARTON:  I think it is the former.
         DR. APOSTOLAKIS:  Well, then, how come we have a full
     section here on problem identification that says the scope of problem
     identification and corrective action programs includes processes for
     self-assessment, root cause analysis, safety committees, operating
     experience feedback, and corrective action.
         These are processes.  They are part of the organization
     structure, so from this document I get the impression that this is
     outside safety culture, whereas in the INSEC documents it is part of the
     safety culture.  You have to make sure you have good processes and so on
     and so forth, which is fine with me.  But now that explains the
     Commission's decision, because I am a little puzzled by the decision if
     it includes processes for self-assessment, root cause analysis, and so on.
         I mean obviously we do care about these things --
         DR. SEALE:  It's my impression, George --
         DR. APOSTOLAKIS:  -- so these are not safety culture.
         DR. SEALE:  It is my interpretation, this personal
     interpretation, that they don't -- do not want to be in the position of
     having appeared to have told a utility whether it has a vice president or
     a -- whatever -- and exactly how many people report to that person and
     whether their training organization reports to a Nuclear Executive Vice
     President for the whole organization or the guy in charge of a given
     plant, those kinds of questions.
         DR. APOSTOLAKIS:  Okay, so somehow we have to make a
     distinction.
         DR. BARTON:  George, I think if you look at this, this is
     really looking at safety culture, if you are looking at the thing that
     you read.
         DR. APOSTOLAKIS:  Yes.
         DR. BARTON:  If you look in the corrective action program,
     self-assessment, root cause analysis, safety committee, you are really
     looking at what is the safety culture of that organization.  You really
     are.  I don't care what you call it.
         DR. APOSTOLAKIS:  And that is what INSEC said, but the
     previous section here said "a safety-conscious work environment, also
     referred to as a safety culture" and then it concludes by saying, "In
     short, no separate and distinct assessment of licensees' safety culture
     is needed."
         Then there is immediately another section that says, ah, but
     for processes that have to do with identification and corrective action
     programs, we worry about those, so I think a lot of this is semantics.
         By the way, I don't disagree with you and that clears up a
     lot of things in my mind because I think Bob is right, Bob Seale, that
     the Commission did not want to appear as if they were interested in
     which Vice President reports to the President.
         DR. SEALE:  And the driving force behind that is that there
     are documentable cases in the past where certain people who are
     employees of the Commission maybe overstepped their authority and
     appeared at least to have interfered with that sort of thing and it did
     raise a considerable amount of ire and fury and heat.
         DR. APOSTOLAKIS:  Yes.
         DR. BARTON:  That came out in the Towers-Perrin Report.
         DR. APOSTOLAKIS:  And there are other parts of the safety
     culture that I do worry about.  For example, operator training and so
     on.  Yes -- in the broad definition, that is part of it, so we really
     are excluding here, and I think rightly so, the attitudes, goal
     prioritization, things that you can't really see, right?
         MR. GILLESPIE:  Right.
         DR. APOSTOLAKIS:  So maybe just change the words a little
     bit to make sure that that is what you are doing.     
         This is not really -- safety-conscious work environment, the
     way you describe it here, is not also referred to as "safety culture."
         MR. GILLESPIE:  I don't think we used the words "safety
     culture" but if we did we shouldn't have.
         DR. BARTON:  Yes, you do.
         MR. GILLESPIE:  We do.
         DR. BARTON:  You have got it thrown in there a couple of
     times.
         DR. APOSTOLAKIS:  Page 14.
         MR. BARANOWSKY:  That's a throw-away.  I am glad the ACRS
     caught something here.
         [Laughter.]
         DR. SEALE:  You put it in quotes so it is --
         MR. BARANOWSKY:  I admit it -- we made a mistake.
         DR. APOSTOLAKIS:  I am not asking you to admit you made a
     mistake.  It's good that you did.
         [Laughter.]
         DR. APOSTOLAKIS:  But it is a very important part though
     that you really worry about -- in fact, if you go to a plant or your
     resident inspector realizes there is no mechanism, there is no process
     for operating experience feedback, he probably won't like it.  Right?
         DR. SEALE:  Well, any root cause analysis that doesn't
     address concerns having to do with human action is idiocy.
         DR. APOSTOLAKIS:  Which is most root cause analysis.
         MR. BARANOWSKY:  When you cross the threshold and the
     resident inspector goes and says why is this occurring, and they see
     that they don't have a good root cause analysis activity, then you are
     getting into these things, but that is a cause of the problem kind of
     thing.  That's why we are saying you won't have an indicator for that by
     itself because if you did I mean that would be fine but we don't know
     how to relate some of these things to the cornerstones directly.  There
     isn't a simple algorithm for doing that.
         DR. SEALE:  The thing you detect is the consequence, not the
     action.
         MR. BARANOWSKY:  Right.
         MR. GILLESPIE:  And in fact independent of which cornerstone
     or which PI or which inspection activity might cause us to engage
     further in doing a root cause analysis, the end of the root cause
     analysis might find that problems exist in places not first indicated,
     because at that point you are in an analytic mode and how pervasive is
     it is the kind of question you are asking, how limited is it, was it
     isolated, was it not isolated, so you roughly go from one kind of
     inspection process and procedure to a different one when you get to that
     point.
         DR. BARTON:  I have seen you put your light out.  Does that
     mean you are done?
         MR. JOHNSON:  That means --
         MR. GILLESPIE:  I think he gave up on the last viewgraph. 
     He's not going to put it up.
         [Laughter.]
         MR. JOHNSON:  We have moved beyond the last one.  I can put
     it up but I don't see a point.
         DR. APOSTOLAKIS:  I am done with my questions, Mr. Chairman.
         DR. BARTON:  You are done with your questions?  All right.
         Throughout the document, and I think you mentioned this
     morning, you still have got some work to do, and there are not just
     shoe-strings to tie but some other open issues that
     are mentioned throughout the document.
         When do you plan to have that supplemental information
     ready?  When can we hear?  Will that be after pilot programs, after this
     thing goes out for comment?  What is the timing on when we'll hear what
     is missing in here?
         MR. GILLESPIE:  Let me ask Al Madison to come up -- Alan is
     the Team Leader for the Transition Task Force -- and ask him to address
     part of the schedule.
         DR. BARTON:  Is that the last slide you weren't going to
     show?
         MR. GILLESPIE:  No.  This is one that wasn't even on the --
         MR. MADISON:  I am Alan Madison with NRR.  We have met
     before.  Your specific question, so I can address it?
         DR. BARTON:  You go through the document and it says there is
     more work to be done; we've got to do something here -- throughout, all
     right?
         My question is when will you fill in those blanks and when
     will you have that work done?
         MR. MADISON:  Well, I am glad you recognize there is a lot
     of work to be done, a lot of work left, and, as the paper alludes
     to, we are going to establish a task force, which we have
     begun to do.  Because of the broad scope of activities that are involved
     there will be a lot of folks involved in that task force.
         Some of the items -- Mike, you'll probably want to list the
     key transition tasks.
         These are some of the key tasks we think are involved in
     there and you probably noted a few others.  But we have broken it out
     into the work remaining in developing the procedures and the policy
     documents that are going to be needed to support both the inspection
     procedures and the assessment procedures.  Enforcement policies also fit into
     there.  The performance indicators, how we are going to collect the
     performance indicators, how licensees -- the format they are going to
     report them in.  What are we going to do with it?  How are we going to
     get that out to the public?  All that is yet to be developed, the
     support infrastructure for those changes, as well as other changes in
     various areas.
         We have got a lot of training to do, we have got
     development.  Some of the jobs that were left to be done, including --
     well, it was called benchmarking, we are now really calling that a
     feasibility study.  Is the process -- is it feasible that we can take
     the performance indicators, that we can take inspection data and compare
     those and come up with an assessment in the cornerstone areas?  We plan to do
     that during the transition.
         One of the biggest things, and that would be the next slide
     with the transition strategy.  One of the biggest things we recognized
     right away is there are a lot of communication and change management
     issues that we need to deal with in this.  And so we, as part of
     establishing the transition task force, we have named what we call a
     change champion who is going to be kind of our voice out in the industry
     and the agency, and to the public, and that is Sam Collins, Director of
     NRR.  He will be our primary communicator of what we are doing and where
     we are going.
         To assist him, to help him out in that, and to help us out,
     we are in the process of forming a change coalition, which we feel are the -- we
     are hoping these folks are what we are calling the opinion leaders in
     the various areas, and they may or may not be in management.  We are
     focusing right now on the regions, but we are also going to develop some
     folks here in headquarters that will be on that change coalition.  We
     see them as our primary source of information back from the field, how
     the transition, how the processes are being perceived by the people in
     the field, and changes, recommended changes and improvements that we can
     make to the process.
         We are also going to try to leverage off of this a group of
     individuals, the senior individuals in this change coalition, who may be
     able to provide some management oversight and some management control
     for the process implementation that we go through.
         We also recognize there is industry and other external
     stakeholders, such as the states, that we are going to have to
     communicate with and bring on board.  And we are developing some -- or
     we are working with some opportunities, I guess, where we can take some
     action in that area.
         One of the transition task force members will be -- that
     will be his primary job, to look after the change management and the
     communication issues that we have.  Another member of the task force
     will look at the training issues that we have and develop training for
     the staff.  And we will probably end up developing some joint training
     with industry to get the plants up to speed on the performance
     indicators, how they are going to report them, and how we are going to
     -- what we are going to do in that area.
         Now, the schedule.  These are what we see as our major
     milestones as we go along.  Now, throughout that time period we expect
     to be communicating and keeping the ACRS informed, and we will try to
     schedule some opportunities that we can do that.  We look at having a
     Commission paper sometime in the March timeframe and receiving final
     Commission approval on the recommendations, not on the whole package. 
     We are not going to have all the details filled in.
         We plan to start a pilot in June.  Now, to support the
     pilot, we are going to have to have a good draft of the inspection
     procedures and the program documents that will support those inspection
     procedures.  And we are also going to have to have provided some
     training to those selected pilot plants and the NRC staff that is going
     to be involved in that, that includes the senior resident inspectors,
     the branch chiefs, and probably some DRS personnel, and others in
     various areas.
         And we are also looking at, in October, providing a workshop
     that would be a joint industry-NRC workshop, public workshop, to provide
     training for everybody involved.
         Does this sort of answer some of your question?
         DR. BARTON:  That's part of it, Alan, but I was under the
     impression that you owed the Commission some more supplemental paper,
     some more documents by next month sometime.
         MR. MADISON:  The March paper, I think, promised -- the
     March paper is -- the promised March paper has to deal with completing
     some of the items, I think, and the Office of Enforcement is to come up
     with a recommendation, a final recommendation, of where they are going
     with the enforcement policy.  I think we are going to come up with a
     better draft.  We didn't provide them any documentation on what we are
     going to do with the inspection data to relate it to performance
     indicators, and we owe them a document that at least outlines the
     concept that we have and gives some direction of where we are going with
     that.  I am not sure how much detail we owe them on that.
         DR. BARTON:  Well, are we going to get to see any of this
     stuff?  Do you need a letter from us before this goes out for public
     comment or what?  Where are we in your kind of chain here?
         MR. GILLESPIE:  Let me tell you where we are in the cycle of
     things.  It is out for public comment.  To maintain this aggressive
     schedule, it went out for public comment before the Commission even got
     it, and the Commission went on faith that the staff had a good document,
     which was -- I appreciate that because it allowed us to move forward.
         We will be going back to the Commission, I believe the
     meeting is tentatively scheduled for March 18th.  About the first week
     in March, we will be going back with a smaller paper -- we are not
     sending 400 pages back up -- that will deal with the exceptions and the
     comments.  And that will include dealing with any comments now we get
     from an ACRS letter that comes to the Commission from this meeting in --
         DR. BARTON:  February.
         MR. GILLESPIE:  February, I forget when the date is.
         DR. BARTON:  Fourth, 5th, something like that?
         MR. GILLESPIE:  Fourth or something.  Yeah.  So the ACRS
     letter will be factored in to this comment period.
         DR. BARTON:  We are going to really -- we are really
     commenting on this policy.
         MR. GILLESPIE:  Yes.
         DR. BARTON:  SECY-99-007, January 8th document.
         MR. GILLESPIE:  Yes.  Yeah.  And now the hole, the real big
     hole that the Commission focused on was this subjective or potentially
     rule-based scale that an inspector could rapidly assess his results
     against and know whether -- do I just turn this over to the licensee, or does
     it warrant further pursuit?  That was the significant hole that was
     identified.  We are going to be working on that.
         As a minimum, our best insights on that will have to go in
     the March paper.  As a maximum, maybe we will get it done sooner and get
     it out for comment to everybody, including the ACRS for information and
     comment.
         DR. SEALE:  When the March paper is available, we will get a
     copy of that?
         MR. GILLESPIE:  Yes, absolutely.
         DR. POWERS:  Mr. Chairman, I think we should devote some
     time to this letter.  I think we have to go beyond just the details and
     the specifics, but point out what we think are the strengths of the
     approach that they have adopted in what I think is an extraordinary
     piece of work.
         DR. BARTON:  Yes.
         DR. POWERS:  And that stands as a template for a lot of
     other activities that the NRC is undertaking.
         DR. MILLER:  However, you are going to -- you recommend we
     comment on the process of getting to this document as well as the
     document itself?
         DR. POWERS:  That's right.  I think that -- I think some of
     the ideas that emerged in this document are strokes of genius.
         DR. SEALE:  They are worthy of emulation.
         DR. POWERS:  And could be used in a lot of other contexts to
     get over some of the rough spots of moving in the direction of more
     risk-informed regulations.
         DR. SEALE:  I have one comment --
         DR. POWERS:  Other than that, we ought to praise the
     document wherever we can.
         MR. GILLESPIE:  Well, we would also appreciate that, because this
     sets us up for the future.  I promised Research -- you have always got to
     give them a plug -- this is the beginning, and we know there are weaknesses
     in the approach that deserve future consideration.  So any insights as to
     where future consideration might first go would also be appreciated. 
     Because once this goes in place, the next step is to figure out how to
     make it better.
         DR. SEALE:  I have one comment on the schedule or the
     outline of future action you made.  I heard about task force members or
     team members.  I heard things about piggy-backing or some term like
     that, some administrative or leadership, or something like that in all
     of this process, and I never did hear any word about the regional
     administrators.  And it seems to me that while this is going to be in
     the pilot stage, only a small part of what is going on in each region,
     that it is very important that the regional administrators be brought up
     to speed on this process very early, because they can either make it or
     break it.
         MR. GILLESPIE:  Yes, absolutely.  And that is -- I hope
     nothing I said led you to believe that we weren't going to involve the
     regional administrators.  Yes, and we are wrestling with that.  And the
     agency right now is in a quandary, and the quandary is we have taken on, as
     an institution, what the consultants tell us we probably shouldn't have,
     all at once.  We are reorganizing.  We just went through a massive
     number of retirements in the management ranks, including the regions. 
     And at the same time, at least as far as the regions are concerned, we
     are maintaining a current program and we are trying to completely turn
     it upside down.  And we are trying to do it all at once.
         DR. BARTON:  Business as usual.  Right.
         MR. GILLESPIE:  Yeah.  And it has put strains on the system. 
     While we would have liked to have actually had a regional administrator
     volunteer to come in and work with us for six months and run this show,
     I would have liked the relief, it wasn't in the cards, because two of
     the regional administrators are basically new.
         DR. SEALE:  Well, it is not in the nature of those beasts.
         MR. GILLESPIE:  Yeah.  But we are fostering, as part of the
     change coalition, and we are trying to -- we have been brainstorming
     this.  I will throw this out on the table.  This is a concept.  I won't
     even say I have talked it over very much with people, at least we have
     touched it with Sam, and that would be putting a senior level committee
     together which might be deputy regional administrators, regional
     administrators, or projects division directors, who are the guys who
     really have to make this work when we get it out -- one person from each
     region who is kind of the senior member of this concept of a change
     coalition -- and we wouldn't be on it, and, actually, subject ourselves
     about once every two to three weeks to using them like a murder board,
     where we would spend one day briefing them, and a half-a-day letting
     them tear us apart.  And at the Commission table at the end, in March,
     we would ask the chairman of that group to come and sit at the table
     with us.
         Now, that is kind of a conceptual concept that we are
     looking at.  We haven't really run that by the regions yet, and they
     haven't volunteered for it.  But that is kind of the degree of thinking,
     as we are putting the task force together, that we are going to.  So any
     suggestions, any -- that is kind of the gelling, it is as close as we
     have come right now, conceptually, to how we would both get critical,
     very, very critical -- Can this be applied?  Can we do it? -- advice,
     and have them at the table to say, hey, they took my criticisms and they
     told me why not, or they took them into account, here's my concerns that
     still exist.
         DR. SEALE:  I have another question I want to ask, and it
     goes back to -- I don't know, for some reason, I have always had a
     picture of autonomous fiefdoms as you move from one region to the other,
     and I don't think that is an entirely erroneous picture.  In this -- in
     that chart you had up there earlier, you talked about the dividing line
     between regional review and Commission review.  In all of these mergers,
     acquisitions and things like that that have gone on, how many cases do
     we now have where an operating company has plants in more than one
     region?
         MR. GILLESPIE:  I couldn't answer.  It is getting worse.  I
     mean with Entergy buying Plymouth, we have got Region IV to Region I,
     which is the maximum extent.  I don't know.  We could look it up.
         DR. SEALE:  Yes.  Well, it is interesting to me because it
     seems to me that that -- and I am not sure how you get it, but it might
     be interesting to try to find out, in the eyes of a victim, whether or
     not they see equal treatment from the regions.
         MR. GILLESPIE:  Yeah, that's an interesting point.  It has
     happened, by the way.
         DR. SEALE:  I am sure it has, but, you know, I think it is
     something that the regions ought to be aware of as a reasonable and
     perfectly correct source of performance data that they have to be
     prepared to live with.  Just a wild suggestion.
         MR. GILLESPIE:  Yes, and actually putting on another hat. 
     Within the same branch, we are putting together a program performance
     group of four or five people.  We used to do this about four or five
     years ago, got strapped for resources, couldn't do it.  And with the
     demise, if you would, of the A&E program, we now have some
     people available to get back into more of a routine -- How is the
     program going?  And that is a good point, because we would be visiting
     some utilities and plants, and I hadn't necessarily, I don't think
     anyone has suggested that we ask that particular question.
         I am going to guess there is a half-a-dozen companies.
         DR. SEALE:  I would think so, yeah.  Yeah.
         MR. GILLESPIE:  That cross some line, some way, shape or
     form.
         MR. MADISON:  There is a -- NEI is doing a user survey,
     which they are going to run pre-implementation of this process and
     post-implementation, which may address that question.  We are going to
     try to utilize that survey.
         DR. POWERS:  I want to go back to your question about the
     cultural change that you have to make within the agency itself.  You are
     not the first institution to undertake a re-engineering of its
     procedures.  It has gone on long enough that there is a body of
     management science that has been assembled on the difficulties of these
     things.  I, myself, am unfamiliar with that literature, but I wondered
     if it had anything that was of use to you.
         MR. GILLESPIE:  Yes.
         MR. MADISON:  Definitely.
         MR. GILLESPIE:  In fact -- okay, go ahead, Alan.  He is the
     holder of the textbook, he has got the formula.
         MR. MADISON:  Well, I don't know if I have got the formula
     or the answer, but we are trying to get the answer now.  We are
     utilizing the contractor that the NRC uses, Nick Mann, as well as his
     subcontractors, to give us some insight into where we ought to go.
         As I said earlier, we have one individual who is going to be
     on the task force, August Specter, who has a background in this area. 
     He is going to be the focus for this so that we have somebody who is
     really concentrating on this full-time.  He and I are going to go to --
     actually, outside to the industry, to Salem, to try to get some input
     and some insights from the changes that they have gone through and the
     processes they have used.
         Yeah, we are going to try to collect some of this body of
     knowledge and use it.
         MR. GILLESPIE:  Well, no, no, no.  Think about this, and it
     is funny, the people from Salem were very, very open with us, and they
     said we have done it twice and we messed it up the first time.  And, so,
     I am not sure that they might not be kind of -- when you are going for
     insights, and they gave us some interesting insights and documentation
     already which has caused us some thinking.
         One was people want to hear from the first-line supervisor. 
     They don't want to hear from outsiders.  Which is interesting, because
     the traditional way NRC does things is we kind of collectively as a team
     would have gone out to the regions and lectured the first-line people
     with the supervisors in probably the same group.
         And you start to develop well, we've got to go talk to the
     branch chiefs and division directors separately first and make sure not
     that they're saying yes, we'll do what you're telling us to do, but yes,
     we understand the underlying principles and can explain it to our staff. 
     So some of the insights of what Alan's been doing are already -- it
     slowed us down a little bit, because we said we need to think about
     this.  If you do it in the wrong order, you could mess it up badly.
         DR. BARTON:  I thought it was a surprise.
         DR. SEALE:  There's a variation on that that the INPO people
     use in their plant assessments now, their visits, namely they use local
     peers --
         DR. BARTON:  Right.
         DR. SEALE:  On their visiting team, and the presentation of
     the results of the visit is made by the local peer.
         DR. BARTON:  Right.
         DR. SEALE:  Which is kind of the same thing.  The guy that's
     telling you the finding knows your plant.  He's not somebody from Mars
     that came in and --
         MR. GILLESPIE:  Yes.  And maybe we should have known this
     right away.  We didn't.  And to me that was, if nothing else, an
     extremely valuable insight that said slow down, understand the steps,
     adjust who you're talking to, you want to talk to different people with
     different levels at different times.
         And then you have to say what are they going to do with the
     information?  And the first-line supervisor, division director, do
     different things.  So we're trying to do it right, and in fact we're
     looking at the literature and trying to use the right experts to give us
     the input.
         DR. POWERS:  I mean, I think that's the answer -- simply not
     to be arrogant enough to think that you know all the pitfalls to this,
     because I think more than one company has had serious difficulties
     making these kinds of changes.
         MR. GILLESPIE:  Yes.
         DR. POWERS:  The other thing is, of course, to recognize I
     think the personality of your organization and the personality of the
     Salem organization are about as orthogonal as two organizations can be. 
     So I laud you for taking the words from them, but I caution you they may
     not be 100 percent transferable.
         DR. MILLER:  They say they messed it up the first time.  I
     don't think the story of the second time has been told totally yet.
         DR. POWERS:  Yes.  I mean, this is --
         DR. MILLER:  We don't know the result of the second time.
         DR. POWERS:  Right.  This is a classic theory x and theory y
     organization.  They're just not going to mesh well.
         MR. MADISON:  But they do have some interesting concepts, and
     we want to find out how they really made or are making them work and
     see if we can apply them to our own case.
         DR. POWERS:  I think you're doing the right thing, going and
     collecting information from wherever you can, and then using the part
     that's useful to you and throwing away the part that's not.
         MR. MADISON:  Exactly.
         DR. POWERS:  That's all you can do, and pray.
         MR. MADISON:  We may not have the answer yet, but this is
     probably the first time in my recollection with the NRC that we've made
     this large an attempt at managing the change rather than just making
     it.
         DR. POWERS:  The other piece of wisdom that I've heard
     frequently expressed in this area is that it takes three years -- that
     if you try to bring about a change in the culture in six months, and
     then make another change at the end of that six months, you make no
     change at all.
         MR. MADISON:  Um-hum.
         DR. POWERS:  And it is so tempting for managers to say, okay,
     here's the time line for when we will have made this change, and to lay
     it down by the calendar rather than by the personality of their
     organization.  I think that has been one of the biggest discoveries
     people have made --
         DR. MILLER:  Each organization has a different personality. 
     I'll guarantee any university will take a lot more than three years.
         DR. BARTON:  Three to five years --
         DR. MILLER:  Some organizations might take less.
         DR. POWERS:  Well, I think you have certain advantages in
     some organizations that would change, and certain ones you don't, and
     NRC is likely to be slow, simply because conservatism has been the
     watchword of the way you do things, not only within this agency but in
     all of nuclear power.  The whole thing is, think carefully about making
     a step.  Always make sure you've got your foot on firm ground before
     you take a step forward.  I mean, we tell people to do this all the
     time, and that infects the nuclear business, whereas in Silicon Valley
     they're not too concerned about whether they're floating on air or not.
         DR. MILLER:  Dana, wouldn't you expect, with all the changes
     in the upper management and so forth, that that could be in some cases
     disadvantageous, and in other cases advantageous, in this culture?
         DR. POWERS:  I think you're right that whenever you change a
     high-level boss, people anticipate change and are more open to it.  On
     the other hand, they're always suspicious.
         DR. MILLER:  But didn't somebody reflect -- I think it was
     Bob Seale -- that there are fiefdoms at the different regions, which is
     probably true.  Now you've had two regional administrators who were
     probably in control of those fiefdoms depart.  If the fiefdoms are
     going to change, it's a good time to change them, if indeed that's what
     we're going to do.
         DR. BONACA:  I have one more question regarding the report.
     I realize that I got this report and have seen it for the first time
     just a few days ago, so -- and the question I have is, the report sets
     essentially four objectives for this change in the inspection process:
     improved objectivity, scrutability, being risk-informed, and also, as a
     secondary goal, it seems, the reduction of burden.  And clearly,
     reading it, scrutability is improved and risk-informed is there, okay.
         The only thing that I was asking was, I would have liked to
     see some comparison of the areas of inspection that we used to do before
     that would not be done by the new process and vice versa to convince me
     that in fact we have enhanced objectivity and also that we have reduced
     burden.  Without that it was a little bit difficult to make a judgment. 
     And I wonder if you have done that kind of comparison, just as a
     minimum, you know, large pockets of perspective, what we're not going to
     look at anymore that we used to look at before, and vice versa, what is
     new that we were not looking at before.
         MR. GILLESPIE:  Yes, let me -- we actually kind of -- I wish
     I could give a flat yes or no -- we actually did a check going the other
     way, and the question was would this program as proposed in the areas of
     inspections catch the things that we've seen at plants in the past which
     are not covered by PIs.  And therefore you still see a large -- almost
     the same emphasis as we have today on engineering.  So the engineering
     area didn't change very much.
         For the other areas, we did it, but didn't write it in.
     There's a chart in there that talks about the risk-informed baseline
     inspection; it comes up to like 1,850 hours.  We actually have a table
     that's in the inspection manual today that we check each year to see
     that we're kind of in the ballpark on the current core inspection
     procedure, which is kind of the equivalent.  And for a two-unit site,
     that comes out to be about 2,200 hours.  The best estimate, if you did
     the sampling process and looked at the things that are in here, was
     about 1,850.  So we are anticipating, as a first estimate, that it will
     be about 20 percent less.
         But we didn't write that down as a formal report.  But it's
     really comparing this table to the table that's in there.  And that was
     done.
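         [For illustration only: a quick arithmetic check of the figures
     quoted above.  The 2,200- and 1,850-hour totals are the rough estimates
     given in the discussion, not official program values; a minimal sketch
     in Python:]

         # Rough check of the inspection-hour reduction discussed above.
         # Figures are the estimates quoted in the discussion, not
         # official totals.
         current_core_hours = 2200  # approx. current core, two-unit site
         baseline_hours = 1850      # approx. risk-informed baseline

         reduction = current_core_hours - baseline_hours
         percent_reduction = 100 * reduction / current_core_hours

         print(f"Reduction: {reduction} hours ({percent_reduction:.0f} percent)")
         # Prints: Reduction: 350 hours (16 percent) -- in the ballpark
         # of the "about 20 percent less" first estimate quoted above.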
         DR. BONACA:  The main concern I had was that, you know, I
     see a lot of restructuring.  The question is, are we really changing
     what we're going to look at?  And if we are, what are we going to lose
     in the process to gain in that process?
         MR. GILLESPIE:  Yes.  If there was a major area of loss --
     and this may sound counterintuitive -- I think in the whole set of
     inspectable areas, the major reduction is in operational areas.  I'll
     give you an example relative to maintenance.  The current procedures
     would cause maintenance procedures to be observed on some periodic
     basis, and I think it's once every two months or something now.
         Well, what's the point of that?  What's the safety
     significance of it?  I mean, those kinds of questions are asked.  I
     think you'll find in the inspectable areas that the focus is on
     observing the postmaintenance testing to assure that the equipment
     will work properly, which doesn't mean sitting there eight hours and
     watching a maintenance procedure.  But it does mean watching the test
     procedure, which will be much truncated.
         The PIs tended to be very operational PIs -- numbers of
     scrams -- and the risk insights that Research came up with showed
     that -- John, help me out -- some percentage of the risk in real
     experience actually came from maintenance, wasn't it?  You used to
     have a pie chart you showed --
         MR. FLACK:  Yes, and actually that was the big discrepancy.
     A lot of the events, the risk-significant events that we were
     observing through the S program, were coming from maintenance and not
     so much from operations.
         MR. GILLESPIE:  Operations.
         MR. FLACK:  And I think that's where you're seeing the
     change, the tradeoff now is they're going to be looking more at
     maintenance and less on operations.
         MR. GILLESPIE:  Yes, and the way we look at maintenance in
     this process becomes different -- hopefully a more intelligent way of
     looking at it, rather than just the gross approach of, we're going to
     watch this guy do maintenance on a valve from beginning to end.  And
     you know that when we watched him do the maintenance, we didn't hang
     around to see the postmaintenance test?  Because that wasn't in the
     procedure.
         DR. POWERS:  You have to advertise this.  I mean, you just
     really have to advertise that you've taken advantage of these insights
     that you've been getting out of your past inspection program.  I mean --
         MR. GILLESPIE:  And some of this is practical insights.
         DR. BONACA:  I'm saying that as you go forward in a
     transition, I think it would be helpful to have somewhere a brief
     discussion of this, because you are leaving behind issues or reviews
     that may be significant, and the question is you have a burden of proof
     that this transition in fact is effective --
         MR. GILLESPIE:  Yes.
         DR. BONACA:  And also you have a burden of proof of closing
     on these objectives that you set in your, you know --
         MR. GILLESPIE:  Right.
         DR. BONACA:  And the other thing is that it would be
     certainly useful to the industry and certainly to me to understand the
     differences there and to really -- it would give me much more comfort
     that in fact the objectivity is truly improved in the process.
         MR. GILLESPIE:  Let me -- I'm only hesitant because this is
     an opportunity lost, you might say.  If this was a research project --
     literally a kind of more academic research project -- an experiment
     that doesn't work is still a successful experiment; it tells you your
     hypothesis was incorrect.  And some of us look at an experiment as
     successful only if it proves the hypothesis.
         And when Bruce was putting together the program, he went
     through a process of identifying over 100 inspectable areas which
     basically encompassed the program as it is today -- his group were
     actually inspectors.  And so they basically took today's program and
     cut it up into all these pieces, and we ended up with about 40 areas
     when you applied all the insights.  What we lost was that they didn't
     write down the results of the brainstorming sessions which necessarily
     caused them to filter those things out.
         And I don't know how recoverable that is, but I don't mind
     going back and asking the question, and going back through some early
     drafts.  Steve Stein was on the group, and I talked to Steve, and I
     said, Steve, did you save the file folders to show the traceability of,
     basically, how do you go from everything we look at now to what you look
     at here, and why?
         DR. BONACA:  As a minimum, if you had written that here, I
     would have been more comfortable.
         MR. GILLESPIE:  Yes, and that was a reaction a number of us
     had when we started reading, and we said, oh, no, we left off the
     negatives.  We left off the reasons why we left things off.
         We will look at the file folder and see if it is
     recoverable.  I don't want to promise that it is.
         MR. FLACK:  Just to clarify, they are not really lost.  I
     mean for the baseline inspection, it does a certain level of
     inspections, but, again, you revisit these depending on your performance
     indicators and findings.  So it is not totally lost, it is just that you
     will get into it after something else leads you there.
         MR. GILLESPIE:  Yes, but what is lost is -- What did we
     trade off?  What would we be inspecting today that we are not going to
     inspect a year from now because we have this performance indicator?
         MR. FLACK:  As part of the baseline.
         MR. GILLESPIE:  As part of the baseline.  And that, I am
     afraid, is lost.
         MR. JOHNSON:  But I almost think a better way to do it, or
     at least as good a way to do it, and one that we had anticipated, was
     that we will go look -- we have talked a lot about success criteria, and
     one of the success criteria is do we meet the objectives.  And I think
     it is going to be important that we go back a year from now, and we look
     in terms of what it is we are actually inspecting.  And what did the old
     core provide?  What are we looking at now?  Do we have a sense that we
     are more objective?  Do we have a sense -- I mean have we hit our
     targets in terms of burden reduction?  Have we hit those things that we
     outlined for ourselves?  And we have promised to do that all along, and
     the Commission is holding our feet to the fire on that.
         DR. BONACA:  The other thing is that I would expect some of
     your resident inspectors would want to know, why am I not going to
     inspect this anymore?  I think that is important.
         MR. GILLESPIE:  And that is going to have to be factored
     into the training.  And I will give you one last bit of insight that
     is hidden in this paper, in Pat's write-up someplace, and it goes --
     of a thousand LERs that came in in about a year, only 1 percent, or
     10, were significant enough to be screened at a 10 to the minus 6 or
     greater risk number, which is significant in that --
         DR. POWERS:  It is not hidden, it is highlighted in there.
         MR. GILLESPIE:  Yeah.  I'll tell you the truth, I have
     talked to a lot of people who said they read the paper and they missed
     this sentence, and I remembered it, because it stuck.
         DR. POWERS:  It stands right out.
         MR. GILLESPIE:  If that is in any way reflective of also the
     kind of things that an inspector sees as he is walking around a plant,
     then that is going to be the most significant cultural change that we
     will have, that 99 percent of what we have been observing in the
     facilities is likely not risk significant.
         Now, I don't know that I can take that and directly make
     that conclusion, but, certainly, it says that many of the things that we
     see going on day to day, even though they are viewed as incorrect or not
     right, may not necessarily have a risk significance.  That is the one
     most significant cultural change that is going -- that we are challenged
     with right down to the lower levels, because we have trained ourselves,
     we have trained the industry, that this is a problem, regulatory
     significance, procedural compliance.
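         [For illustration only: a minimal sketch of the kind of
     threshold screen behind the statistic quoted above -- roughly 10 of
     1,000 LERs per year at or above a 10 to the minus 6 risk number.  The
     threshold comes from the discussion; the sample events, their values,
     and the use of a per-event risk estimate as the screening measure are
     illustrative assumptions, not the staff's actual method:]

         # Hypothetical screen of LERs against a risk threshold.  The
         # 1e-6 threshold is quoted in the discussion; the events and
         # risk estimates below are fabricated for illustration.
         RISK_THRESHOLD = 1e-6

         lers = [                  # (LER id, estimated risk significance)
             ("LER-001", 3.0e-9),
             ("LER-002", 4.5e-6),  # would screen in
             ("LER-003", 8.0e-8),
             ("LER-004", 2.1e-7),
             ("LER-005", 1.2e-6),  # would screen in
         ]

         significant = [lid for lid, risk in lers if risk >= RISK_THRESHOLD]
         print(f"Screened in: {significant}")
         # With ~1,000 LERs a year, the figure quoted above is ~10
         # screening in, i.e., about 1 percent.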
         DR. BARTON:  Verbatim compliance with regulations, it is not
     quite that bad, but it is almost there.
         MR. GILLESPIE:  So I don't know the answer, but this is the
     change process.  It is not just changing how you inspect and what you
     inspect, but it is the fundamental understanding of why I am just
     turning this over to a licensee, when only a year ago, this might have
     been a level 4, level 3 violation.
         DR. SEALE:  It is interesting.
         DR. APOSTOLAKIS:  We are not requiring verbatim compliance
     anymore?
         DR. BARTON:  I hope not.
         DR. APOSTOLAKIS:  If you read the nuclear safety culture
     thing, we do.  We do.  We do.
         MR. BARANOWSKY:  They might be NCVs, non-cited violations.
         DR. BARTON:  That is how to handle it.
         MR. GILLESPIE:  Anyway, it is that kind of data that needs
     to be put together in an understandable way -- that is going to be a
     challenging training course, if you will, or adjustment to the P-111
     course that we currently have.  To get this kind of information, and
     the context, to fit with what people feel they are intuitively seeing
     at the facilities is going to be difficult.
         Even Sam Collins gave me an incident; it was a morning
     report that came into the EDO.  A BWR flange had a steam leak on it
     while the plant was going up in power.  A 10 percent power tech spec
     says check your high pressure injection turbine-driven pump.  They let
     steam in.  The steam flange wasn't tightened.  Steam leak.  They
     turned off the steam, they tightened the flange, and the plant
     continued up in power.  The system got very exercised.  It happened to
     be the most important thing that happened that day in the morning
     report.
         It was not risk significant.  So everyone got exercised all
     the way up through the EDO, you might say, in a communications sense,
     for something that was not -- the system worked.  The tech spec said
     test, they tested.  Yes, they have to go back and find out why weren't
     the bolts tightened the first time around.  And Sam looked at me when I
     said, this isn't risk significant.  He said I can't believe you, go get
     a risk guy.  So I took it down to the risk guy and I said --
         DR. BARTON:  Well, it is going to take years to change that
     mindset.
         MR. GILLESPIE:  Is this significant?  And the risk guy
     says, you know that is not significant.  I said, yeah, but you have to
     write it down for me.  So he wrote it down, he put his name on the
     bottom.  I went back in to Sam, and said, see, Sam, it is still not risk
     significant.  And he just looked at me and said, we are really going to
     have to change the way we think around here.
         DR. BARTON:  You are darn right you are.  It is also not
     only risk; it is also that you are afraid you are going to be accused
     of not knowing what is going on out there.
         MR. GILLESPIE:  Well, and that is -- yeah, and that is the
     other sense.
         DR. BARTON:  That is their part, whether it is risk or not.
         MR. GILLESPIE:  When you start stepping back, if you do less
     inspection, you have less information.
         DR. BARTON:  Right.
         MR. GILLESPIE:  When you are less intrusive, you have less
     knowledge on a day to day basis.
         DR. BARTON:  Yeah.
         MR. GILLESPIE:  It is a different approach.  And it is going
     to be a challenge to get this into the culture of this organization.
         DR. BARTON:  Oh, yeah.
         DR. SEALE:  Somehow you have to bring into the attributes of
     this system the idea that you are inspecting smarter.
         MR. GILLESPIE:  Yes.
         DR. SEALE:  And maybe that 1 percent example is a good
     lead-in to that line, to that point.
         DR. MILLER:  The same number of hours in inspection on the 1
     percent rather than on the 99, is that it?
         DR. SEALE:  Well, --
         MR. GILLESPIE:  That is really the focus of a risk-informed
     inspection program: you are inspecting somewhat less, but you are
     actually inspecting the risk-important things more.
         DR. MILLER:  That's the whole objective.
         DR. POWERS:  I think it is important to consider a human
     element of the inspector's life; it is what I call my proofreading
     theory of facility inspection.  I sometimes get to inspect facilities.
     If I have people working for me developing a facility and they don't
     make a certain number of errors, my inspection becomes very poor.  As
     soon as they cross a threshold of a certain number of errors, then I
     get them all.  And so it seems to me that I would tend to portray this
     as -- it is true, 99 percent of these things are not risk significant,
     but they may well portend a degradation, and you still need to bring
     them to the licensee's attention; it is just that we don't do anything
     about them.
         MR. GILLESPIE:  That was our earlier discussion.
         DR. POWERS:  Yeah, I think we had this earlier discussion.
         MR. GILLESPIE:  Exactly.
         DR. POWERS:  That what you don't want is inspectors being
     bored.
         MR. GILLESPIE:  Yeah, you don't want the inspector stepping
     aside, because what you are going to have -- you are going to miss
     something you should have picked up.  And that is actually the challenge
     to the screening process, which is the big IOU to the Commission of
     inspection results.  Having some false positives when you screen is
     probably okay.  In other words, some things that you do a little more
     work on, you might not have had to do a little more work on, because I
     feel comfortable we are talking about fairly small numbers of things. 
     But having a false negative is totally unacceptable.
         So the challenge in setting it up -- it has got to be an
     asymmetrical uncertainty in the screening process, and Alan has taken
     this challenge on, so now, officially, he is hooked.  So -- and that is
     why it might end up being kind of a multi-stepped thing, a rule-based
     thing that is looking at certain questions.  Is it a train, is it a
     system?  If it is a program failure, it is kind of like asking a
     question -- Are there enough program failures that the coincident effect
     at any given point in time is risk significant?  That is a real tough
     question to answer, and that starts getting at the sense of regulatory
     significance or not, because it is not -- Was it wrong and fixed, wrong
     and fixed, wrong and fixed?  If it is wrong and fixed, and things don't
     happen coincident to degrade several systems or several trains, then it
     won't hit a safety threshold, yet, you have to give it to the licensee
     because you don't want all those things happening.
         So that, to me, is the one major, major, major intellectual
     challenge in this whole system that we have ahead, is doing that for the
     inspection findings.  Because what I said doesn't depend on numbers of
     findings, it depends on the severity of the finding.  So, in inspection
     space, we may be dealing with the safety significance of the finding,
     not the number of findings.  And, again, that is a different approach. 
     If you have a lot, it has to be safety significant, right, that is the
     intuitive feel.  It may not be.  And it is going to be very difficult,
     because now we are counter-intuitive with our inspection force, and that
     is going to be -- that is going to be real hard.
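         [For illustration only: a minimal, rule-based sketch of the
     asymmetry described above -- any doubt sends a finding to further
     review (a tolerable false positive) rather than screening it out (an
     unacceptable false negative).  The questions about trains, systems,
     and coincident program failures come from the discussion; the encoding
     and the specific decision order are assumptions, not the actual
     screening procedure:]

         # Hedged sketch of a multi-step, rule-based screen for
         # inspection findings, biased so that doubt leads to review,
         # never to screening out.
         def screen_finding(affects_train: bool,
                            affects_system: bool,
                            is_program_failure: bool,
                            coincident_degradations: int) -> str:
             """Return 'further review' or 'turn over to licensee'."""
             # Direct impact on a train or system: always review further.
             if affects_train or affects_system:
                 return "further review"
             # Program failures: review if degradations could be
             # coincident, since coincident effects may cross a safety
             # threshold.
             if is_program_failure and coincident_degradations >= 2:
                 return "further review"
             # "Wrong and fixed" items with no coincident effect go to
             # the licensee's corrective action program without follow-up.
             return "turn over to licensee"

         # Example: an isolated procedural problem, wrong and fixed.
         print(screen_finding(False, False, True, coincident_degradations=1))
         # -> turn over to licensee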
         DR. MILLER:  So an inspector who found one very risk
     significant situation would get more credit, so to speak, than
     somebody who found a hundred insignificant ones -- is that what you
     are alluding to here?
         MR. GILLESPIE:  No, it is a matter of -- the general sense
     is, gee, I found eight things wrong in procedural compliance, and the
     typical enforcement letter that would go out, signed by a division
     director, would declare it a programmatic failure, and it would be a
     big deal in today's environment.  I don't know what it would be in
     tomorrow's environment, because if those eight procedural items
     happened disconnected in time and never actually affected a piece of
     equipment, and we are trying to develop a safety outcome score, or
     goal, or judgment, it puts it in a different perspective.  And it is
     not the way we have been trained.
         DR. POWERS:  I have a question about public perception.  If
     I am a member of the public sitting next to this -- living next to a
     plant, and I see less stringency on the part of the NRC, the number of
     level 4 violations has gone to zero, lots of things that on the face of
     it would have been violations today are not tomorrow, and I, as a
     reasonable person, come to you and I say, how come?  And you say, oh,
     they weren't risk significant.
         And I respond, gee, how would I know that?  And you say,
     well, it is real simple, you run the SAPHIRE code with the right kinds
     of inputs and outputs, and you take the things and you do a risk
     assessment on this, and you look at your 95 percent confidence level
     on a T-scale with 5 degrees of freedom, and out comes this number.
         Do you think you have really persuaded me that you are not
     --
         DR. MILLER:  Have you persuaded anybody on this Committee
     even?
         DR. POWERS:  You're not relieving a burden upon your
     licensee at the expense of increased risk on my part?  And by risk in
     that context, I mean my qualitative perception of the risk.
         MR. GILLESPIE:  Actually, we are working very closely with
     public affairs on trying to answer that question.  But there are some
     flaws in your basic assumptions; they are different than our basic
     assumptions.  One, we expect the inspector will write down when he
     sees something wrong.  Now, what we have to talk about is what we call
     it.
     The current enforcement policy change, as yet unapproved, but I
     understand the vote sheets are in, would take all these level 4s and
     call them non-cited violations, would not have us follow up on every
     single one of them, but would have us audit, on an audit basis, to
     follow up on them.  We would expect --
         DR. POWERS:  If I am a member of the public who knows a
     little bit about your processes, what I see is violations used to be
     cited, now they are not.  Violations used to be followed up, now they
     are not.  And when I -- as a reasonable question, I mean I don't think
     it is an outrageous question, before I make any accusation, is I ask you
     why, and you give me this claptrap about risk analyses and things like
     that.
         MR. GILLESPIE:  Like I said --
         DR. POWERS:  What other conclusion can I arrive at?
         MR. GILLESPIE:  Well, I think, first, the intent is that we
     are not going to ignore the items.  Now, what we are doing is calling
     them something different, and that gives us, you might say, a public
     perception challenge to deal with, because we are calling them something
     different, and we will not be following up on every single one we used
     to follow up on.  So we are going to have to be very articulate in
     saying why.  We haven't been there yet.
         In fact, one of the criticisms of this whole report, that
     they opened up the meeting with, was that a member of the public would
     have a very difficult time reading this report.  It is not
     user-friendly.
         But let me go back and say why this is better than what we
     have, and that is kind of where we are at right now.  What we have right
     now in judging overall safety of the facility is the SALP process.  The
     SALP process is actually very, very, very subjective, there is no doubt
     about it.  In fact, the insights we would have from this program that we
     have kind of run so far would be that a lot of facilities, or a large
     group of facilities, are run so safely that it is almost impossible to
     discern between them, yet SALP was trying to force itself to discern
     between several very well run facilities because we were obligated to
     write a report.
         This will allow us now to have more information out to the
     public directly related to their operations and more frequently, and
     that is the plus.  The minus from the system is we are not going to
     follow up on every one of the items.  The plus to the system is, by
     publishing the performance indicators, and an interpretation of what
     they mean on a quarterly basis, and something bigger on an annual basis,
     we will be displaying to the public more and current information on the
     safety of the facility.  So we are substituting one process for another
     process.
         How we put that together is going to be a real challenge,
     and how we communicate that to the public is going to be another real
     challenge.  In fact, public affairs offered that maybe we could go talk
     to the League of Women Voters and a whole bunch of other people to try
     to deal with the concerns.
         DR. POWERS:  I think the important thing, I mean I hate to
     be so complimentary, and I will try to avoid that --
         MR. GILLESPIE:  Oh, no, but this is a hard one.  You know,
     it is easy to deal with the darn technical issues.  It is very difficult
     to deal with the translation of what you think is a good judgment into
     you are backing off, you are not doing as much.
         DR. POWERS:  One thing I liked about the draft response you
     articulated here: you went for about two minutes without once bringing
     up the word "risk," and I think that is the start of formulating a
     good response -- bring up the risk after you have said all those
     things you did: more information, better information, better
     interpretations, and things like that.
         Get to the results of going in this direction, and then, if
     someone wants to ask why you have all this good insight, you can bring
     up risk.  Rather than saying, oh, well, we have used these analyses,
     which are impossible for me to reproduce, or even understand, to focus
     my efforts so that I get the most bang for the buck.  You know, that
     just leaves me cold as a response, whereas when you tell me, ah, you
     are getting much more, you are really getting much more out of this --
     using much the words you did -- and don't bring up this arcane
     technology that requires a degree in statistics to understand.
         DR. SEALE:  A little confession may be good for the soul
     here, too.  I have heard that it is, at least in some circles, although
     it is not too popular these days.
         DR. POWERS:  But in both the states that you and I live in
     --
         DR. SEALE:  Very definitely.  And the confession may be --
     and it may take a little bit of homework to verify it -- that, of all
     those violations that you had previously cited, the ones that you are
     no longer publicizing, or going into the record with, and so on, were
     ones that had no impact when you finally get around to the point of
     public risk.
         DR. POWERS:  But I don't think I would try to bring that
     story to the public --
         DR. SEALE:  Not up front.
         DR. POWERS:  -- because he didn't think -- he didn't know
     whether they did or not.  All he knew was that they were being stringent
     and now they are not.  I think -- I really like this idea of no, no, no,
     it is not less, you are getting way more.  I really like that approach.
         DR. SEALE:  Well, we are looking at the things that count.
         DR. POWERS:  I wouldn't even go that far.  I would say we
     are looking --
         DR. MILLER:  You are getting more.
         DR. POWERS:  We have always looked at the things that count,
     and we are continuing to look at the things that count, and we are
     giving you far, far more than you ever got before, because you used to
     get these funny looking SALP scores that meant absolutely nothing to
     anyone on the face of the planet, not even those in the NRC.  Now, you
     are getting all this other wonderful stuff.  I think that is the germ of
     a public -- of risk communication that may actually work.
         MR. GILLESPIE:  It's funny, that's where Bill Beecher came
     out in a lengthy meeting we had with him yesterday, trying to
     brainstorm.  It was, tell them what you are giving them, don't --
         DR. POWERS:  That's exactly right.
         DR. MILLER:  Tell them the glass of water is getting fuller,
     not emptier.
         MR. GILLESPIE:  It is getting fuller, not emptier, yeah.
         DR. BARTON:  I think we need to wrap this up and figure out
     what we want to talk about at the full Committee meeting, I guess next
     week.
         DR. APOSTOLAKIS:  We are all here.
         DR. BARTON:  Pardon?
         DR. APOSTOLAKIS:  We are all here.
         DR. BARTON:  Yes, we are all here.  What else?  What --
         DR. APOSTOLAKIS:  Let's take the time to write a letter. 
     Well, I don't think we need a presentation, we are all here.
         DR. SEALE:  Well, the only thing I can think of is to ask
     them if they could, at that point, tell us what additional information
     they are liable to be able to put in the Commission letter.
         DR. BARTON:  The supplemental information.
         MR. MADISON:  I think at this time it is premature.  We have
     got a draft concept that we are refining, and I really want to put it
     through a lot more exercises before we bring it here.
         DR. SEALE:  Okay.
         DR. BARTON:  What about the policy issues that remain that
     you mention in this document -- evaluating the interface with Part 50,
     revisiting event response evaluation, revisiting N-plus-one,
     organizational impact?  Is that something you are prepared to talk
     about, that we could address in our letter, since the Commission has
     to make decisions on those policy issues?
         MR. MADISON:  No -- those four policy issues were a heads-up
     to the Commission.  And we actually had a fair amount of discussion on
     them.  These are not today's decisions.
         DR. BARTON:  Okay.
         MR. MADISON:  They are next year's decisions.  In some
     cases, they are six months away.  But the basic question was, if you do
     go down this path, and there is less inspection, but you keep
     N-plus-one, residents tend to be generalists, not specialists.
         DR. BARTON:  Right.
         MR. MADISON:  If you need engineering specialists and what
     you have got is generalists living at the plants, that gives you an
     organizational problem and causes you to need, if you go down this path
     very far, to step back and say, I have got an institutional problem I
     have to address, not a technical problem.
         The reconciliation or harmonization with Part 50, that was
     our number one issue on the list.
         DR. BARTON:  Right.  Right.
         MR. MADISON:  It is just highlighting to them, we are
     getting out in front, and if Part 50 changes the scale, then we are
     going to have to have feedback from that, and they are going to have
     to be cognizant of Commission decisions made.  If the Commission
     approves our basic scale of where our bands are, then the Commission
     is going to be hard-pressed to change its mind without some
     justification later.  So approving what we are doing actually has
     future implications that say, hey, the bands are set.  Here's the
     licensee response zone for whatever it is.
         So, no, those four issues were a heads-up.  I think they
     need to be dealt with in the course of probably the next 12 months. 
     They can't be ignored, particularly if we are successful --
         DR. BARTON:  That's true.  Right.
         MR. MADISON:  -- with the pilot program, but not in the next
     three months.
         DR. BARTON:  Okay.  What else do we want to hear?  George, I
     think the problem we've got is it's already in the Federal Register,
     right?
         DR. APOSTOLAKIS:  You can take that time to write a letter.
         DR. POWERS:  Or at least a summary from the Commission
     Chairman in order to -- or from the Subcommittee Chairman at the very
     least.  I don't think it would be a bad idea to bring up these policy
     issues even though they are six months down the line, because that is
     about our time schedule for thinking about the subject.
         DR. BARTON:  Okay.
         DR. POWERS:  And it would be opportune for putting it on our
     agenda so that when you come in here, we are not having to spin up from
     zero -- that and a summary from the Subcommittee Chairman would fulfill
     our obligations.
         DR. BARTON:  At the full Committee meeting.
         DR. POWERS:  At the full Committee meeting.
         DR. BARTON:  Okay.
         DR. APOSTOLAKIS:  How much time do we have?
         DR. BARTON:  I don't know.
         MR. MARKLEY:  10:30 on Wednesday.
         DR. MILLER:  One of the policy issues -- it may not be a
     policy issue -- we raised a concern about the fact that the inspection
     assessment process is leading the regulatory situation.  Is that an
     issue you want to raise?
         DR. POWERS:  Yes.  I think that's a --
         DR. MILLER:  I am beginning to think that maybe one could
     look at the advantages of this leading the regulatory process -- if we
     want to look at the glass of water as getting fuller.
         DR. POWERS:  And I will remind you --
         DR. MILLER:  It might lead them in the right way --
         DR. POWERS:  I would still like to know about your
     validation of your tools for your Appendix H and Appendix I analyses.
         MR. GILLESPIE:  That's an IOU we'll take away, and we'll get
     back to you on the SAPHIRE code and what happened to that.
         DR. WALLIS:  Well, I took a vow of silence when I came in
     this morning.
         DR. POWERS:  Why?
         DR. MILLER:  It's about those hundred Senators being quiet.
         DR. WALLIS:  But Dana woke me up when he said how would the
     public perceive this, and I think it is not just a question of public
     perception but too many of these changes, which I think you have done
     very well, appear to be motivated by what is good for the utility or
     what is good for the administrators in the NRC, and it would be good if
     somehow it could be put in perspective to give a really fair measure of
     what has the effect of all this been on nuclear safety.
         I don't know how you would do that but really that is a
     measurement I would like to see or get some better feel for.  I like
     what you are doing, but how do I tell if it's been good for nuclear
     safety or are we backing off or what?
         Maybe we back off but it's still better for nuclear safety,
     but how do I know?
         MR. GILLESPIE:  Yes, that's -- let me explore that a little,
     because this is kind of a key point underlying the paper.
         In the paper it talks about this being a process to ensure
     that safety is maintained, so we are clearly not putting something in
     place -- and from the way Pat described his statistical approach and
     the way the first threshold was set, we are clearly trying to maintain
     what we believe is an acceptable level of safety.  We are not forcing
     it to be better.
         Quite honestly, I don't think we could force less than the
     0.8 scrams on average per plant that exists today, so how is it good
     not to do that?  That is a public perception problem.  You mean the
     NRC is not forcing it to be better and better and better and better?
         I don't have a good answer for that because we haven't
     digested it.  We understand it, but that is kind of one of those
     things that is hidden in the paper.
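         [For illustration only: the transcript does not spell out how
     the first threshold was set.  Purely as a sketch of one way a
     performance-indicator band could be placed above an industry average
     of 0.8 scrams per plant per year, here is a Poisson-model example; the
     model choice and the 95th-percentile band are assumptions, not the
     staff's actual statistical method:]

         # Illustrative only: placing a threshold above an industry mean
         # of ~0.8 scrams/plant/year.  The Poisson model and 95 percent
         # band are assumed, not taken from the program.
         from math import exp, factorial

         MEAN_SCRAMS = 0.8  # industry average quoted in the discussion

         def poisson_cdf(k: int, mu: float) -> float:
             """P(X <= k) for a Poisson(mu) count."""
             return sum(exp(-mu) * mu**i / factorial(i)
                        for i in range(k + 1))

         # Smallest annual scram count that a typical well-run plant
         # would exceed less than 5 percent of the time under this model.
         threshold = 0
         while poisson_cdf(threshold, MEAN_SCRAMS) < 0.95:
             threshold += 1

         print(f"Illustrative response threshold: > {threshold} scrams/year")
         # -> > 2 scrams/year under these assumptions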
         DR. KRESS:  Well, let me take a stab at that at the risk of
     breaking my vow of silence.
         DR. POWERS:  What is this vow of silence?
         DR. KRESS:  This process we are going through could lead to
     this bit of a problem that George pointed out: you will find some
     plants, for example, that may get outside your inspection lines, but
     then you go in and look at them more closely and find that they still
     comply with all the regulations.  And what you will have found there
     is perhaps that this particular plant, while it still complies with
     the regulations, has -- let me put it this way -- not quite as good a
     risk status as desired.
         MR. GILLESPIE:  Yes, that's --
         DR. KRESS:  And it is because your regulations have allowed
     that.  Now this allows you to focus on the problem of risk, which is
     really your regulatory mission, and you can say, well, have our rules
     done the job or not, and which part of the rules is keeping us from
     doing our job.  And it will allow you to go back and say, now, do we
     need to change the rules some way, and in what manner do we need to
     change them, in order to get plants within a risk status that we would
     better desire.
         Now of course that raises questions of backfits or
     substantial improvements in regulatory analysis, but I mean you're stuck
     with that, but it will allow you to focus your attention on the rules
     and whether they are doing the job correctly or not, and I think it is
     worthwhile.
         I mean that is a public benefit that is going to come from
     it in my mind.
         MR. GILLESPIE:  I'm going to say it kind of harshly -- I
     don't mean it that way, but I don't know how else to say it.  The
     basic premise is that we have tried to pick thresholds that the
     industry could in fact fairly operate within -- that in normal
     circumstances a reasonably well-run plant, not a perfect plant, can
     operate within those thresholds.  If that basic premise is incorrect,
     the industry has to stand up and be counted and say that.
         DR. KRESS:  That may be the way -- your hands may be tied --
         MR. GILLESPIE:  This team and I can only take on so much
     burden in developing an approach that has to be somewhat generic, and
     it has to give the public the ability, quite honestly, to compare one
     plant to another.  But in comparing one plant to another, when you get
     structured like this, it's there, and, yes, if one plant continually
     violated a threshold but was in compliance with all the rules, it
     could have a less desirable public image, and that is kind of -- you
     know, the comment on SALP today -- a similar comment.
         I hope it will be less of a problem than the system today
     gives us in that same arena but we will likely have not fixed it
     perfectly, so the industry now has to speak up and that is why this
     thing is out for comment.
         DR. BONACA:  Your point is a concern I had before, when I
     talked about the fact that most units meet these kinds of quantitative
     criteria that we have here.  From past experience I can remember exact
     cases where units entirely met the criteria that were provided by INPO
     and so on and so forth, and yet their performance was very negative
     because of other issues, which are tied to inspection.
         Now, to the degree to which you have a much more scrutable
     process here, it's going to be very hard to deal with that kind of
     situation.
         I am trying to understand, you know, because I do believe
     that the inspections bring about a lot of insights that tell you whether
     or not a plant is doing well.
         What I am trying to say is that the numerics may force you
     to disregard a lot of these other insights from inspections if
     consistently they give you all greens.
         See what my concern is now?
         MR. GILLESPIE:  Yes, and this is the crux of a past
     criticism we have gotten from GAO in past years.  Let me take what you
     have just said, if I could, to the extreme, and that is when we go in
     and shut a plant down because all of a sudden -- Cook.
         We did an A&E inspection, raised questions about
     engineering, went in and looked at engineering even more.  They went in
     and looked at their engineering even more.  The ice columns became -- it
     just -- it kind of, you might say, expanded to the point where the
     facility has been shut down for an extended period now to get the design
     process straightened out.
         One of the things we were being pressed on several years ago
     was to always identify a problem early, to allow a licensee to correct
     it before it got to the point where you had to shut down to correct
     it.  I believe -- and I am not going to commit the team, unless they
     want to chime in -- that these indicators, plus the inspection -- and
     I can't lose sight of the fact that there's still substantial
     inspection being done in very focused areas, particularly
     engineering -- will still give us an early enough indication that I
     think we can beat that goal.
         Now we looked back retrospectively and Cook was one of the
     ones that Bruce Mallett looked back on and said there is no indicator
     for this problem they had in engineering -- are we still looking at
     engineering?  And Bruce said yes, we are still looking at engineering,
     and we would have had an opportunity to pick that up.
         Cook still would have been shut down, because you would have
     found a problem in engineering, they would have pursued it more, the
     problem would have gotten bigger, and they'd be shut down.  The Cook
     shutdown by this new process likely would not have been avoided.
         DR. MILLER:  But it might have been shorter.
         MR. GILLESPIE:  It might have been shortened, because we
     would have had a different perspective on it, possibly, but I don't
     think the shutdown itself would have been avoided, so for Cook we
     wouldn't have made this goal that was set up for us.
         Inspection insights are still going to be there.  I hope
     they are the insights in the right area.  We have tried to focus them
     that way.
         I would like to bring up one other thing that is in the
     paper and no one here touched upon.  It's something that Mike is going
     to grimace when I say it, and it's the -- what do we call it? -- the
     executive exemption?  No --
         MR. JOHNSON:  Executive override.
         MR. GILLESPIE:  Executive override.  We are allowing a five
     percent executive override and I regret the terminology but I am pleased
     with the concept.
         In fact, if there are significant insights that cause us in
     a very limited number of cases to question the performance indicators,
     we do need to go in and examine those cases and if we find the
     indicators are correct, fine.  That reinforces where we're at.  If we
     find they were wrong, the executive override concept has great value in
     program evaluation space relevant to being kind of like a proof test.
         One of the things that we are going to be kind of committed
     to is kind of looking at the terminology and the basis we had in there
     for that.
         David Lochbaum, UCS, said at the Commission meeting, I don't
     like these executive overrides -- there should be zero executive
     overrides -- because the term had a negative connotation.  But
     actually, having an occasional question to test the system is probably
     a positive move, so we are going to need to go back and re-look at how
     we have termed that in there to take care of that concern.  Not
     everyplace, but I think as a programmatic group in Headquarters we
     have to be conscious that we need to test our assumptions somehow, and
     the only way to do that is from the outside looking in.  You can't
     necessarily always do it from the inside looking out.
         DR. SEALE:  Are you familiar with the term "the halo
     effect" --
         MR. GILLESPIE:  Yes.
         DR. SEALE:  The halo effect was on the basis of
     personalities.  The executive override that you are talking about is
     on the basis of specific physical observables.  I think that is a
     reasonable trade, and it does not have the personalities, which have
     been a problem in the past.
         MR. GILLESPIE:  So that is one aspect in there.  There was
     some discussion on that.  You know, if that is a piece -- you know --
     those are the kinds of pieces and kinds of comments that we're searching
     for that will help us hone the system down or in, get it in tune, and
     move forward with it.
         I think the concept was right, the move was right, it's the
     right thing to do, but we worded it wrong.
         DR. SEALE:  Yes.
         MR. GILLESPIE:  And that just comes as time evolves and you
     say, well, why did we say it that way?  Why do we really think this is
     the right thing to do?
         That is kind of an evolving nature, which is why we are not
     committed to rewriting this entire 400 pages, but now taking the
     comments, adding them to these 400 pages, and then moving forward and
     writing the actual program documentation.
         With that, I am worn out.
         DR. BARTON:  Well, one thing.  I know you said you didn't
     want to rewrite the document, but there are areas -- the document
     could probably be streamlined and a lot thinner, because there's a lot
     of redundancy.
         Also, I don't remember whether there is a master table of
     acronyms in there, but you use ERO in two different sections --
     Emergency Response and then Effluent something and Radiological
     Emergency.  You may want to either put a master acronym table in here
     or else change one of the two, because you use it in two separate --
         MR. GILLESPIE:  You can tell we had multiple authors putting
     this --
         DR. BARTON:  Yes, and that is probably the result and why it
     got that way, but you want to look at that so you don't end up with
     confusion down the road or at least put a master --
         MR. GILLESPIE:  Pat's comment was that it was more than
     multiple.  In fact, there were about 24 authors who wrote different
     sections.
         DR. MILLER:  There is strength in the weakness.
         DR. BARTON:  Anything else we want to talk about next week
     or here that we haven't heard before?  If not, then --
         DR. SEALE:  I have an entirely different matter to ask the
     people.
         DR. BARTON:  While these guys are here?
         DR. SEALE:  No, no.  Just don't you guys leave, that's all I
     am saying, the members.
         DR. BARTON:  And we are done and we will break for lunch.
         Thank you for all the hard work and for coming in here and
     putting it up again.  I guess we are adjourned.
         [Whereupon, at 12:31 p.m., the meeting was concluded.]