UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
***
RELIABILITY AND PROBABILISTIC RISK ASSESSMENT
Nuclear Regulatory Commission
Room T-2B3
Two White Flint North
11545 Rockville Pike
Rockville, Maryland
Wednesday, June 28, 2000
The committee met, pursuant to notice, at 8:30
a.m.
MEMBERS PRESENT:
GEORGE APOSTOLAKIS, Subcommittee Chairman
THOMAS KRESS, ACRS Member
JOHN D. SIEBER, ACRS Member
MARIO V. BONACA, ACRS Member
ROBERT E. UHRIG, ACRS Member
WILLIAM J. SHACK, ACRS Member
PARTICIPANTS:
NOEL F. DUDLEY, ACRS Staff
HOWARD J. LARSON, ACRS/ACNW Staff
JOHN T. LARKINS, Executive Director, ACRS
MICHAEL T. MARKLEY, ACRS Staff
GERRY EISENBERG, ASME Project Team
SID BERNSEN, ASME Project Team
KARL FLEMING, ASME Project Team
RON SIMARD, Chair, ASME Project Team
RICK HILL, GE Nuclear Energy
BARRY SLOANE, Westinghouse
RAY SCHNEIDER, Westinghouse
MARY DROUIN, NRC, Office of Research
IAN WALL, Consultant, EPRI
BOB BUDNITZ, ASME Project Team (via speakerphone)
BRUCE MROWCA, Baltimore Gas & Electric
C O N T E N T S
ATTACHMENT
PROPOSED SCHEDULE
INTRODUCTORY STATEMENT
INTRODUCTION
MAJOR CHANGES FROM THE PREVIOUS DRAFT IN RESPONSE TO PUBLIC COMMENTS
MATCHING PRA ELEMENT CAPABILITIES AND APPLICATION CHARACTERISTICS
P R O C E E D I N G S
[8:30 a.m.]
CHAIRMAN APOSTOLAKIS: The meeting will now come
to order. This is the first day of the meeting of the ACRS
Subcommittee on Reliability and Probabilistic Risk
Assessment. I am George Apostolakis, Chairman of the
Subcommittee.
ACRS members in attendance are Mario Bonaca,
Thomas Kress, William Shack, Jack Sieber and Robert Uhrig.
The purpose of this meeting is to discuss the
proposed final ASME standard for probabilistic risk assessment
for nuclear power plant applications.
Tomorrow the subcommittee will discuss the status
of risk-informed revisions to 10 CFR Part 50, including
proposed revision to 10 CFR 50.44 concerning combustible gas
control systems, issues in the Nuclear Energy Institute
letter dated January 19, 2000, Option 3, and public comments
related to the advance notice of proposed rulemaking on 10
CFR 50.69 and Appendix T, Option 2.
The subcommittee will gather information, analyze
relevant issues and facts, and formulate proposed positions
and actions, as appropriate, for deliberation by the full
committee.
Michael T. Markley is the cognizant ACRS staff
engineer for this meeting.
The rules for participation in today's meeting
have been announced as part of the notice of this meeting
previously published in the Federal Register on May 16,
2000.
A transcript of the meeting is being kept and will
be made available, as stated in the Federal Register notice.
It is requested that speakers first identify themselves and
speak with sufficient clarity and volume so that they can be
readily heard.
We have received no written comments or requests
for time to make oral statements from members of the public
regarding today's meeting. However, Mr. Robert Christie of
Performance Technology, Incorporated, has requested time to
make a presentation during tomorrow's session concerning
proposed revision to 10 CFR 50.44.
We will now proceed with the meeting and I call
upon Mr. Gerry Eisenberg of ASME to begin.
MR. EISENBERG: Thank you. I am Gerry Eisenberg,
Director of Nuclear Codes and Standards at ASME, and I want
to thank the subcommittee for this opportunity to brief the
committee as well as to receive direct and early feedback on
our proposed ASME PRA standard. I think this feedback is a
very important part of our process.
With me at the table here, all the way to my
right, is Karl Fleming, a member of our Project Team and
Standards Committee; Sidney Bernsen, who is the Chairman of
our Standards Committee; and Ron Simard, who is Chairman of
our Project Team. Also, supporting Project Team members
Rick Hill, Barry Sloane, Ray Schneider and Ian Wall.
With that, I would like to turn it over to Dr.
Bernsen.
MR. BERNSEN: Good morning. My name is Sid
Bernsen. As Gerry said, I am Chair of the Committee on
Nuclear Risk Management, the Standards Committee that is
responsible for approving the standard and maintaining it.
We have a few visuals and they are also in a
handout. It was prepared for both the workshop that we held
yesterday and for this meeting today. I don't intend to
cover in detail all of the slides, but they are for your
information.
The first slide is just to review where we have been,
and we are happy we are finally back here
again to talk to you. We are using the ASME redesign
process which involves using a project team of experts to
develop the document, publish it for early public review and
comment, and then have it approved by our committee,
which is a balanced committee without any dominance in any
sector, and the work is overseen by the Board of Codes &
Standards. And we intend for the standard to be recognized
as an American National Standard, so we are going to submit it
to ANSI for approval.
The current status, historically, as you know, we
issued draft 10 for comment in the spring of '99. We
received 49 responses and well over 2,000 general and
specific comments and suggestions. This project team has
worked intensively to address the comments. I am not aware
of any effort in standards that involved as much time
investment on the part of the people. The NRC and the industry
have both participated heavily in this. Project team
members have worked extremely hard to address the comments.
Our draft 12, which is the one in your handouts,
was issued for comment on June 14th, and included with it
is a white paper that summarizes where we have been and where
have come to.
Just briefly again, the scope and purpose of the
standard, it covers a Level I PRA analysis of internal
events at power, excluding fire, and a limited Level II,
sufficient for LERF evaluation. It is developed to support
risk-informed applications, including, of course, those
within the ASME Codes & Standards framework, the inservice
inspection, inservice testing, and others underway. And it
is developed to support the use of existing PRAs, which, as
we get into our discussion, is something to keep in mind.
It provides a process for determining the ability
of a PRA to support an application and it provides options
for augmenting the PRA either by adding to it or by
supplementary analysis to handle those cases where the PRA
has weaknesses and deficiencies.
Projected schedule, perhaps a bit optimistic, but
we are going to work toward it. August 14th, the comment
period ends. The project team will work to disposition the
comments and we hope by early October it will go to the
committee for approval, and that particular package will
include responses to the substantive comments. We will
probably go for a parallel public review at that time, the
formal public review.
Then the votes from the committee are due back in
a month. The team will work to resolve the comments. And
if we are successful, the whole package can go to ASME Board
of Codes & Standards for their concurrence before the end of
the year. And the ANSI process may take a month or so more.
The purpose of this review, and, as you know, we
held a workshop yesterday where we introduced this to a
number of members of the industry and public, is primarily
we want to make sure that we have resolved your specific
comments -- "your" meaning, in this case, the ACRS, which
obviously sent a lot of comments through the staff. And we have tried to
address them, but we really need the feedback from you on
how well we have done, and on the acceptability of other changes we
have made in response to other comments.
Recommendations for the future. This is a living
document. We are probably going to have to defer a number
of the comments and recommendations for future
consideration, so long as we come up with a standard now
that is adequately comprehensive and usable. And we hope
that the comments will be supported with a basis,
justification and proposed wording.
The only other thing I would like to mention is we
do have a number of representatives of the project team here
today. They are participating as individual experts. Their
comments don't necessarily represent the position of the
committee or ASME. Obviously, we haven't formally approved
the standard, and, therefore, we don't have an ASME position
on the standard, but I think you will hear from people who
are quite knowledgeable and, in a few cases, we may even
expose some areas that still need resolution, where there
are some differences of opinion and approach.
And we certainly welcome your interest, which we
know has been continuing, and the input that you have
provided to us. So, with that, I am going to turn the
meeting over to Ron Simard, who will discuss in more detail
the comments and what we have come to. Thank you.
MR. SIMARD: Good morning. I am Ron Simard. I
would like to acknowledge two more Project Team members who
have joined us since Sid made his introduction, Frank Rahn
and Mary Drouin.
Gerry, I would like to skip right to the slide
that summarizes the comments that we got on Rev. 10, because
what I would like to do is set the stage for Karl Fleming's
presentation and the more detailed discussion that I expect
we will get into about the approach we have taken in Rev.
12.
But let me try to help you understand what was
behind our rationale for the approach in Rev. 12. As Sid
said, we got a substantial number of written comments at the
end of the comment period on Rev. 10, and I am holding this
up. This is a two-sided copy. And in addition to the
comments that you see here that were submitted in writing,
we had discussions at a public workshop held shortly after
Rev. 10 was released, and at a number of key industry
meetings throughout the year. And what you see on this
viewgraph is my attempt to summarize what were the very
strong and clear messages that came through in all these
various discussions.
There was a very strong sense that Rev. 10, as it
was written, was too prescriptive, and it didn't allow the
flexibility needed to apply it to a variety of risk-informed
applications. One thing that we heard throughout the year
was that somebody had counted the number of "shall"
statements that were in the standard, and I am not going to
propagate that number by repeating it here, but there was
perceived to be a large number of "shalls."
Now, there were a number of concerns with that,
and I think one concern that really bothered us the most is
that they said the large number of shalls made it very
difficult to use with the process that we had laid out in
this standard for our risk-informed application. And the
related remark in the second bullet here is that we needed
to do more to allow users to distinguish among the grades of
application, given that there is, you know, a pretty broad
spectrum of applications that require different levels of
PRA capability. And again, another related comment in the
third bullet is that the applications that we are trying to
support today are applications that involve the wide mix of
PRAs. I think you all are familiar with the variety of PRAs
that are out there today.
And finally, there is a considerable amount of
work that has gone on in parallel with us developing this
standard to assess the quality of PRAs, and that is through
the industry certification process, which I understand you
are going to talk about tomorrow. But as we were working to
develop our standard, the guidelines for that process were
being developed, and visits were being carried out. I am
not quite sure where we stand today, but I have heard that
by the time -- well, certainly, by the time this standard is
out, most of the plants today will have had one of these
visits.
So, again, a very strong comment came through in
the written comments, in the workshop, in the discussions
throughout the year, that these visits were providing a lot
of good insights. And they also represented a significant
commitment of resources, and we needed, where possible, to
acknowledge that and allow a user to make use of any
insights from a previous peer review in the way that we
structured the peer review requirements in our standard.
And finally, although it is not on there, there
were also a number of comments that were favorable with
respect to Rev. 10. A number of commenters felt that,
despite the various comments that I just said about the lack
of flexibility and difficulty in applying the requirements
in Rev. 10 to the process, that, in fact, there was some
good stuff in there. There were some very -- some
characterizations of a PRA that really made sense and were
worth maintaining.
So, this is what we have tried to do in our
approach to Rev. 12. I won't get into too much detail in
the interest of time, knowing that Karl is going to cover
the approach that we have taken to recognizing different
categories of application and restructuring.
But I would like to point out a couple of other
differences that you will notice between Rev. 12 and Rev.
10. There is a fair amount of restructuring. For example,
we had what we proposed as a mandatory appendix to
Rev. 10 of the standard, which had a database to be used for
generic data. And it was decided by the Committee on
Nuclear Risk Management that the standard is not the right
vehicle for that, but they have taken that on for
consideration in the future, whether or not it would be
appropriate for them to issue a separate standard on that.
Another thing that we have tried to do is we have
tried to emphasize that, really, the heart of this standard
is the process we have laid out for using the standard in a
risk-informed application. So, cosmetically, we have moved
that process from the back of the standard now to the very
first thing that you see once you have read the definitions.
And second, we have tried to make that standard more usable.
The other thing that we have tried to do, again,
responding to those comments that we talked about earlier,
is we have tried to link the requirements for the various
aspects of a PRA in our standard to corresponding criteria
in that industry certification process where we could make
the linkage. So, where we could see that one of the
requirements in our standard was equivalent to a criterion
that was being used in the cert process, we explicitly
recognized that.
When Karl -- if Karl goes into the viewgraphs he
has got, for example, of one of the tables of requirements
in Rev. 12, you will see in the leftmost column, there is a
unique identifier for each requirement. And where we can
identify a corresponding criterion in the cert process,
there is also -- that number is there.
The other thing that you will notice is that where
we have retained a Rev. 10 requirement, we have also put in
the number of the subsection where that requirement appeared
in Rev. 10. Only for this review and only to assist you as
you compare what you are looking at today with what was in
Rev. 10. Those numbers will come out when it is published.
The only thing that I might do -- Gerry, would you
put up the last viewgraph, the flow chart, please?
This is something that Karl is not going into in
detail, and that we wouldn't expect to be -- I want to make
sure that we hit it now before we get into the way we have
structured the requirements. I want to emphasize the
importance of the process again.
This slide summarizes some of the main points of
the process as we have laid it out in Rev. 12. We
emphasize, for example, that the process is -- that the
requirements in the standard, for example, apply only to a
PRA that is going to be used in this process. So, the
requirements in Section 4 apply only to a PRA that is going
to be used in Section 3. They are not meant to describe a
PRA that is going to be used outside that context.
MR. EISENBERG: You might point out that it is the
last one. They are looking for it.
MR. SIMARD: In case you are having trouble
finding the slide that is up there now, it should be the
last slide that is in the handout with my name on the front.
You got it?
The other thing is that in the second bullet, we
have added a statement to say that the process is
intended to be used with a PRA that has had a peer review
that meets the requirements of Section 6 of the standard.
A third point that I think came up again
yesterday, we had some useful feedback yesterday, I think
maybe we need to emphasize this a little bit more, is that
in the process we go through the various aspects of the PRA
requirement by requirement, as opposed to saying the entire
PRA has this level of capability. In certification
language, we don't say this is a Grade 2 PRA or a Grade 3
PRA.
And finally, it is only those aspects of a PRA
that you need for the application that you are considering
that would have to meet the capability level that we lay out
in our standard.
DR. BONACA: I would like to just make a comment
for the record, because we discussed this yesterday at the
workshop. I still have an issue or a concern with the
presumption that is in Box A, that one can say, this is
my problem, and all I need to do is develop this
primitive model and that is good enough, because, as I
mentioned yesterday, I have seen it hundreds of times in
the many years that we have used PRAs.
The PRAs always surprised us with findings about
dependencies that we did not understand when we were trying
to address a problem. PRAs always surprised the
specialists, they surprised the electrical engineers or the
mechanical engineers about things that they had not
imagined, and most of them were in the description of the
support systems.
And I am saying that I don't think it is a major
issue; however, I feel that the standard right now
doesn't provide any warning about this kind of issue, at least
in the foreword, where the distinction is being made in the
process. There has to be some forewarning that says that
changes proposed to be addressed with a Category I type of
capability should be very limited. I mean there is a
message somewhere here, but it is not very well
communicated. And this point of the importance of the
dependencies that cannot be intuitively understood up front
has to be presented. That is a judgment I have, and I
presented it yesterday, Karl. And, you know, anybody who has
used PRA extensively always gets these kinds of findings and
surprises.
CHAIRMAN APOSTOLAKIS: So, how would you change
Box A?
DR. BONACA: I would not, maybe not change Box A,
but in the text where you have a description, in fact, of
how the steps are being done, there has to be a very clear
warning that there is always a risk in limiting your
projection of a model that you may miss something there.
MR. SIMARD: Thank you. That is a useful comment.
DR. BONACA: And I can verbalize it and put it
down in writing and send it to you as a comment, I think.
CHAIRMAN APOSTOLAKIS: Yes. I think I would
appreciate it.
DR. KRESS: Would it be useful, Mario, to say --
say, you have identified your issue as a Category I type PRA
need, to use that Category I PRA in an iterative fashion to
verify that, sure enough, it was a Category I? Or is that
lifting yourself up by your bootstraps too much?
DR. BONACA: Well, I guess what I am trying to say
here is that if I had the Category I that I tailored to
address my issue, and then I did the same evaluation with a
Category III, for some changes, Category III will tell me
something different than Category I.
DR. KRESS: Tell you something too much different
than Category I.
CHAIRMAN APOSTOLAKIS: It is really Box C that your
comment is addressed to --
DR. KRESS: Yes, determining.
CHAIRMAN APOSTOLAKIS: It determines the category.
That is where the warning should be.
DR. BONACA: It maybe ought to go there. Yeah.
CHAIRMAN APOSTOLAKIS: Determine the category of
application.
DR. BONACA: Okay.
CHAIRMAN APOSTOLAKIS: That is I, II, or III,
Roman I, II or III.
DR. BONACA: Okay.
CHAIRMAN APOSTOLAKIS: Mario is questioning
whether Category I is always sufficient, even when you think
it is.
DR. BONACA: Or whether you can make that decision
upfront.
DR. SHACK: I think it is really Box A he is
talking about, it is always Box 2, that you have somehow
identified the problem and you have limited it already
upfront.
CHAIRMAN APOSTOLAKIS: Then how about Box 5? That
is where you determine the category. 2 and 5 are related, I
suppose.
DR. BONACA: No, this is in the choice of the
specific requirements, the set of requirements they are
going through. The first assessment up there is how large
-- how well is my model supposed to be in order to address
this specific question.
CHAIRMAN APOSTOLAKIS: I think we are going to
have a discussion of the categories when Karl gets up there,
the appropriate slides. So let's say that we note the
comment.
I think Mr. Bernsen wanted to say something.
MR. BERNSEN: I was just going to say that perhaps
we do have something, I think it is in the quantification
area, where we say, when you are all done, you have got to
review this for reasonableness. And it may be that it would
be better to consider as an option, when you get done doing
the application, look back and see that you have had a
reasonable --
DR. BONACA: But if you look at the
quantification, I mean you have statements like, you know,
for Category I, you may want to check that the truncation
total does not exceed the CDF from the rest. I mean you may
want to do that.
MR. BERNSEN: That type of thing, right.
DR. BONACA: This is so loose that, you know,
there is not really a verification that you are making. You
know, if I go through those requirements on the
quantification, they don't give you any --
MR. BERNSEN: What I am saying, a similar thing at
the end of the application, when you have done it and you
have your results, then you need to sit back and look at it
and say, what have I done, is it reasonable?
DR. BONACA: Just one last comment. My main
concern is that there is a presumption in this, and
people in good faith may think, and probably they are
thinking today, that they have a very limited model and they
can do the world with it, because there is sufficient
description. But the fact is, I can tell -- I mean anybody
who uses the PRA knows how many times the PRA provides surprises
to the deterministic people, because it reveals
dependencies that they don't understand upfront.
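[Editor's note: the quantification sanity check alluded to above -- comparing the total frequency truncated away against the retained core damage frequency -- can be sketched as follows. The acceptance fraction and the sample cutset frequencies are illustrative assumptions, not values from the draft standard.]

```python
# Illustrative truncation sensitivity check (assumed convention: the
# frequency discarded below the truncation cutoff should be only a
# small fraction of the retained core damage frequency).

def truncation_check(cutset_frequencies, cutoff, max_fraction=0.05):
    """Return (retained_cdf, truncated_total, acceptable)."""
    retained = sum(f for f in cutset_frequencies if f >= cutoff)
    truncated = sum(f for f in cutset_frequencies if f < cutoff)
    return retained, truncated, truncated <= max_fraction * retained

# Hypothetical cutset frequencies (per reactor-year) and cutoff:
freqs = [3e-6, 1e-6, 4e-7, 9e-8, 5e-8, 2e-8, 1e-8]
retained, truncated, ok = truncation_check(freqs, cutoff=1e-7)
# If 'ok' is False, the truncation cutoff should be lowered and the
# quantification repeated until the discarded total is negligible.
```

A check like this addresses only convergence of the quantification; as the discussion notes, it does not by itself verify that the model captures the important dependencies.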
MR. SIMARD: Well, I think at this point I will
end and let Karl start walking us through the way the
requirements are structured in more detail. And then that
will put specific examples before us that we can talk about.
CHAIRMAN APOSTOLAKIS: Good idea.
MR. SIMARD: I will just note one thing, if we
have any comments in particular about the Level II LERF
analysis, all the sections, the nine elements of the PRA
that we describe in our standard were assigned to various
members of the team with one team member as the lead. In
the case of the Level II LERF analysis, the team lead was
Ray Schneider, who, unfortunately, has a conflict and will
have to leave here around 10:00. So, if we have additional
comments beyond that, I think other team members can help,
but it might be good, if there is anything really
substantive, to try to involve Ray if we can.
CHAIRMAN APOSTOLAKIS: Perhaps after Mr. Fleming
gives us an overall view of the methodology, we can jump
into LERF and make sure that the comments are covered. Now,
who is this Mr. Fleming?
MR. FLEMING: My name is Karl Fleming and I am --
CHAIRMAN APOSTOLAKIS: The committee is not
familiar with you.
MR. FLEMING: I am a member of the Project Team
working on the standard.
CHAIRMAN APOSTOLAKIS: Okay.
MR. FLEMING: I would like to begin my
presentation with a few comments on Mario's concern, because
I think it is a valid concern. But there are a couple of
comments I want to make that could perhaps mitigate the
impact of your comment.
For one, I think one part of your comment
indicates, and I agree with this wholeheartedly, there is a
critical mass for a PRA, that before we can even put the PRA
label on something called a PRA, it has to meet some minimum
qualifications. And it is certainly our intention that the
Category I requirements capture that, and if there are some
specific problems or limitations with our requirements that
don't get us to that critical mass, we certainly are anxious
to get that feedback.
But another reflection I want to make is Ron
indicated there has been, you know, more than half of the
plant PSAs have been subjected to this industry
certification peer review process. I have participated on
about 10 of them myself. And I don't think there is a
Category I -- I mean I doubt, I haven't seen all of them,
but, based on my evidence, I would doubt if there is a full
Category I PRA out there.
I think every PRA out there has many elements that
would classify as Category II and some Category III. And I
think the concept of the block diagram that we have shown
earlier is to try to clarify that a given PRA may have an
outstanding accident sequence model for transients and
LOCAs, but may be very weak for ATWS or very weak for
station blackout. So, there may be specific areas of the
PSA that are Category I or maybe not even Category I, but
other aspects of their PRA and systems and data treatment
that may be very good.
So the block diagram is meant to clarify that, for
some applications, what -- the current PSA, with its
weakness and strengths, could be adequate for a given
application, and to advance the concept that perhaps one can
use the PSA today and incrementally, you know, build on its
capabilities without having to invest huge resources to
bring the whole PRA up to some level before they can begin
to apply it.
CHAIRMAN APOSTOLAKIS: Speaking of resources,
Karl, you are very experienced with these things, given that
most units have an IPE now, what do you think the cost would
be, roughly, for the utilities to upgrade those to a good
Level II PRA and then a good Level III PRA? What are these
huge resources we are talking about all the time?
Is it $10 million or half a million dollars?
MR. FLEMING: I would say that if there is an
example of a PSA that went to the minimum requirements
and not much further, and did not update it and
so forth, and needed to do risk-informed applications, I
would say that the typical upgrade cost, if they just sort
of purchased the services from a consulting company, may be
one million dollars to update the Level I PSA, and perhaps
half a million for the Level II.
CHAIRMAN APOSTOLAKIS: So with a million and a
half, they would have a very good Level II PRA?
MR. FLEMING: Right.
CHAIRMAN APOSTOLAKIS: And they would not need to
agonize over Category I, II, III, and all these things?
MR. FLEMING: Right.
CHAIRMAN APOSTOLAKIS: So I'm a little puzzled
here. Where are these limitations in resources and so on?
It seems to me a million and a half, considering the
benefits that the utilities will have from the PRAs, is
nothing.
And yet we hear all the time that there are
limited resources, that we have to develop standards that
recognize that you don't need a good PRA for all
applications, and debate it.
You know, we spend a million and a half debating
when you need a PRA, instead of spending it doing a good
one.
Now, Mr. Sieber, I think, has something to say.
MR. SIEBER: Well, my comment is that in the
context of budgeting for a nuclear power plant, a million
dollars is something. And it takes at least two people,
full-time, to keep the PRA up, and that adds to your
employment list.
And so it's not inconsequential.
CHAIRMAN APOSTOLAKIS: It is not inconsequential,
but I think we're spending that much money arguing about
quality and arguing about -- instead of just doing it. Of
course, this has nothing to do with the ASME standard which
is facing reality, of course.
But I was just wondering why we have all these
things. But anyway, you answered my question.
MR. FLEMING: I think that whatever the resources
are and whoever wants to decide to allocate those resources,
it's also a legitimate consideration to optimally allocate
those resources so that you're adding the resources in the
parts of the PSA that you need to apply today, so you don't
necessarily have to go out and put a big chunk of resources
in at once.
CHAIRMAN APOSTOLAKIS: It seems to me that the
Revised Oversight Process has sent a clear message that this
Agency is serious about risk-informing the regulations.
MR. FLEMING: Right.
CHAIRMAN APOSTOLAKIS: So whether the utilities
want to spend a million dollars now, or drag their feet and
spend it three years from now, I think it's coming.
MR. FLEMING: Right.
CHAIRMAN APOSTOLAKIS: And the Staff was
authorized recently to request risk information from the
licensees, even if they choose not to submit risk
information. So now you tell me what those signs are.
MR. SIEBER: Right.
DR. BONACA: I would like to add just one more
thing, Mr. Chairman, which is --
CHAIRMAN APOSTOLAKIS: George, for you.
DR. BONACA: I totally agree with your statement
that I don't know of any PRA out there that is just a
Category I. But also, I would like to remake the statement
that I made yesterday that if there was one Category I PRA,
it would be a dog. I mean, it would really be something that
you would not want to use for anything.
And then we have a standard, however, that would
allow for a PRA to be that poor, because it doesn't say here
that only some aspects should be Category I, and others
shouldn't be.
CHAIRMAN APOSTOLAKIS: If you remember yesterday,
we discussed something that I believe the gentleman from
ASME agreed that if you think in terms of Regulatory Guide
1.174, and you remember the almost white lower left-hand
side corner, then as you move towards the boundary, it
becomes darker and darker.
Category II, I believe we said, really would apply
to the nearly-white area. As you moved to the darker areas,
then you enter Category III, which I think is a very good
description of the categories. I mean, that really
cleared it up for me.
And the other important thing, Mario, which is
relevant to your question, is that there is no room for
Category I there. In the context of 1.174, I don't see a
Category I playing a role.
DR. BONACA: You're right. I'm not saying that.
I'm only saying that when I look at the standard, I have
always had an expectation that standards are the standards,
which is, you know, I have something I can look up to, and I
know I can do something with that standard.
And so, I'm a little bit troubled by -- and I
recognize, totally, the point that you are making, about
only looking at certain attributes.
But, taken in a vacuum, you could think about, you
know, these are Category I and what could I do with that?
And the issue -- and the answer is, you cannot do much with it,
which is a very poor --
CHAIRMAN APOSTOLAKIS: Karl, you remember many
years ago, in order to streamline the PRA, there was a Phase
I where people used rough point estimates, looking at other
similar PRAs, PRAs for similar plants.
And they came up with a list of dominant sequences
before they started doing a more detailed analysis.
Now, as I recall, that list was pretty good. I
mean, a detailed analysis did not really upset the order
that you got.
Would you say that that kind of a crude ranking
would be a Category I application?
MR. FLEMING: Well, I think that it would. I
think the main difference might be some of the documentation
requirements.
Those limited-scope, Phase I PRAs that you're
talking about, their primary purpose was to optimize the
resources for the full PRA.
CHAIRMAN APOSTOLAKIS: Right.
MR. FLEMING: And I don't recall many important
decisions being made.
CHAIRMAN APOSTOLAKIS: No, no.
MR. FLEMING: On the basis of that.
CHAIRMAN APOSTOLAKIS: I agree.
MR. FLEMING: It was a way to risk-inform the PRA
itself.
CHAIRMAN APOSTOLAKIS: But the results, though,
were fairly robust.
MR. FLEMING: Yes, if experienced people are doing
the PRA, they are capable of coming up with dominant
sequences very quickly, with maybe ten percent of the
resources of the PRA.
DR. KRESS: George, with respect to your
categorization, linked to the white to dark, the problem I
have with that is that white-to-dark space has -- is in a
plane at which you have to have the absolute value of the
CDF already, and the absolute value of the LERF.
That means you have to have a Category III to
determine those numbers, before you even enter into that
space.
CHAIRMAN APOSTOLAKIS: If you want to be a purist,
that's correct.
DR. KRESS: Yes, well, I am a purist.
CHAIRMAN APOSTOLAKIS: There are situations, I
think, where you have an idea that you are really way down
there.
Some of the newer plants are highly redundant,
they produce numbers like ten to the minus six. Now, you
might say that if you don't have a complete PRA, that number
could be as high as ten to the minus five.
But you're still --
DR. KRESS: You still could estimate.
CHAIRMAN APOSTOLAKIS: -- in that region, so I
mean, at least trying to tie it to the decisionmaking
process, helps, I think. But, again, you can never draw a
line and say Category II to the left and III to the right.
We have a request?
MR. SCHNEIDER: Yes, Ray Schneider, Westinghouse.
One of the issues with Category I is that some PSAs
will have Category I elements that were intended to be
conservative in the modeling, to give a higher estimate of
CDF and a higher estimate of LERF. In those cases, you can
make decisions within certain regions, as long as you're
making the decision based on a well-focused assessment, you
understand what the limitations of the PSA are, and you
understand what the uncertainty bounds are.
And so if you are on the high end, you can make
reasonable assumptions. While the standard isn't purporting
to say that anyone should have a Category I PSA, PSAs could
have Category I elements within them where certain
assessments can be made, and made quite effectively and
quite robustly.
DR. BONACA: I understand.
MR. RAHN: Mr. Chairman?
CHAIRMAN APOSTOLAKIS: Yes.
MR. RAHN: Frank Rahn, a member of the Project
Team.
I know it's hard to believe, but there are
potentially some ramifications that are non-regulatory in
nature, where we don't even need to, for instance, consider
a Category II, but where a Category I may be well sufficient
to make a decision.
Again, the purpose of a PRA is a guide to your
thinking, and there are applications -- for example, making
insurance decisions, which are based on some insights from
PRA where we've used this, or things like trip meters, which
may be economic decisions.
So the ASME is not only serving, if you will, the
regulatory applications, but a whole spectrum of other
applications where a Category I application may be
sufficient.
MR. SIMARD: Karl, are you done?
CHAIRMAN APOSTOLAKIS: No, we are discussing
Categories without -- I suggest going directly to the
categories.
MR. EISENBERG: Show me which one.
MR. FLEMING: The next one, actually.
CHAIRMAN APOSTOLAKIS: Either the second or the
third.
MR. FLEMING: In the effort that we went through
to prepare this draft, we were attempting to meet several
objectives, one of which was to retain the technical
resources that had been set forth in Draft 10, and also to
try to match up the requirements to the certification
process.
We spent quite a bit of time in the last six
months, working on the definition of the application
categories, because the detailed supporting requirements are
all specified in terms of three different application
categories.
We came up with three categories that match the
top three categories of the industry's peer review and
certification process.
We go in there, recognizing that a given PRA will
have to be examined for its capabilities with respect to
the details of the PRA. Individual elements and individual
parts of the PRA within an element may fall into different
categories, and with that recognition, we'd like to be able
to provide a set of tools for the utility to use, so that
they can find the appropriate applications to support the
requirements.
We might move to the next slide, please. A little
bit on the definition: I think George's descriptions
provided some good insights.
The Category I applications, we define in terms of
decisions that are normally made based on deterministic
analyses. And if you had a PRA, you could supplement those
deterministic insights with PRA insights.
But these are applications that refer to actions
that the utility has to do anyway, with or without a PRA,
and with the availability of the PRA resources, can provide
additional insights.
Category II was intended to line up with risk-
informed applications, the minimum applications that might
be required to support a risk-informed application in which
you need a balanced set of PRA insights and deterministic
analyses.
Category III applications get up into the area
where, in Reg Guide 1.174, you need increased management
attention, where the decision more heavily hinges on the
validity and absolute values of the PSA.
DR. KRESS: Let me ask another question: I like
to think in terms of uncertainty. And it seems to me like
you could link each category to the degree to which you need
to know the uncertainty.
For example, Category I looks to me like you don't
need to know the uncertainty, because the application is of
such a nature that you cover it otherwise with the
deterministic analysis.
Category II, you probably need to know something
about the uncertainty, but you can probably do it with a
sensitivity-type analysis.
MR. FLEMING: Right.
DR. KRESS: Category III looks to me like it needs
a full uncertainty analysis. Is that a good way to look at
these?
MR. FLEMING: Yes. There are several different
attributes of the PRA that we have looked at across these
application categories.
And uncertainty is one of those. In fact, we'll
go on to Slide Number 4 where we identified the
differentiation across these categories with respect to the
expectations for uncertainties.
In Category I, there certainly is a need to
appreciate the sources of uncertainty and the general
concepts of uncertainty that are behind the PSA results.
In the Category II, there is an expectation that
you can understand uncertainties well enough to be able to
identify your CDF and LERF estimates with mean values.
That means you have to think adequately through
your uncertainties to be able to say that the point
estimates you're calculating are reasonable estimates of the
mean value.
And then finally, in Category III, a full
quantification of the epistemic and aleatory uncertainties
is expected, which is consistent with Reg Guide 1.174
expectations.
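A small numerical aside may help here: the Category II expectation that point estimates be defensible as mean values is non-trivial for the skewed distributions typical of failure data, where the median ("best estimate") sits well below the mean. A minimal sketch, using a hypothetical lognormal failure-rate distribution (the median and error factor are illustrative assumptions, not values from the standard):

```python
import math

# Hypothetical lognormal epistemic distribution for a failure rate:
# median ("best estimate") and error factor chosen for illustration only.
median = 1.0e-3        # failures per demand (assumed)
error_factor = 10.0    # 95th percentile / median (assumed)

sigma = math.log(error_factor) / 1.645   # 1.645 = standard normal 95th %ile
mean = median * math.exp(sigma**2 / 2)   # lognormal mean exceeds the median

print(f"median = {median:.2e}, mean = {mean:.2e}")
```

With these numbers the mean comes out near 2.7e-3, almost three times the median, so quoting a best estimate as "the mean" would understate the contribution accordingly.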
DR. KRESS: I should have looked ahead to see your
slide.
MR. FLEMING: Yes. So that is one of the
dimensions. Another dimension is the extent to which the
decisions may impact the licensing basis with respect to
safety-related systems, structures, and components.
And as Frank Rahn mentioned, there may be
applications in which the utility might want to make some
changes to the balance of plant to reduce the -- to improve
the reliability of the plant, in which case, it does not
have to apply to the NRC for these types of decisions, and
may have somewhat fewer requirements to document the PSA so
that a regulatory body can participate in the peer review
process.
CHAIRMAN APOSTOLAKIS: Now, again, this is
something that came up yesterday. When we discuss these
things, I think it's important to always bear in mind what
the purpose of this is in the standard.
In other words, I don't think anyone will come to
the NRC and say, well, this is a Category II issue, and
that's why I did it this way; don't ask me any questions.
The staff will say, well, excuse me, but here are
100 questions. So that's not the intended use.
The intended use is before they come here, to
think about the issues. What would be required? So there
is a contribution to the general, I would say, elevation of
the state of the art to a certain level.
So the licensee will know in advance, what kinds
of things are really expected of the PRA. So when they come
here, they will be prepared.
So in that sense, I'm fairly comfortable with
this, because it recognizes, you know, reality.
I mean, we can argue about the words and put in
1.174 references and so on, but I -- but if the intent was,
I mean, to have somebody come and say, gee, the standard
says Category II, and you are asking questions about
Category III, well, excuse me, then I'm against it.
But the Staff will always be free to ask the
questions that they feel are appropriate to ask.
So, that's fine with me. If the licensee wants to
think that it's Category I and come here and be surprised,
well, that's one more surprise for Mario here.
PRAs surprise people in a lot of ways. So, I'm
happy with the -- I mean, not the details, but the whole
idea.
DR. BONACA: I did not express an opposition to
the way that the standard is being -- I believe, however,
that there is a need -- I think the presentation is clearer
than the text.
There is a need to translate some of this into the
text, so there is a clearer understanding of the limitations
of PRA Category I, and, therefore, you don't stray from this
approach.
CHAIRMAN APOSTOLAKIS: In the context of what I
just said, of helping the licensee understand what is
happening here, so that he won't be surprised before the
Staff, that would be a very valuable thing to do.
DR. BONACA: Yes.
MR. FLEMING: The other thing that came out in our
presentation yesterday -- and I think we got some feedback
that we could improve our presentation of this in the text -
- is that there is also an expectation in terms of the scope
of coverage of these requirements, in terms of the dominant
and risk-significant accident sequences.
And in this slide we bring out the expectation
that for Category I applications, we have a set of
requirements, and we expect those requirements would capture
the critical mass issues before we could put the label of
PRA on the product.
But we impose the requirements on the treatment of
the dominant sequences. And so, for example, there may be
some requirements that have to be applied to the dominant
sequences that are not important for the non-dominant parts
of the accident sequences.
When we go into Categories II and III, we have to
extend the application of these supporting requirements to
all the risk-significant sequences. And if we go up into
this area of increased management scrutiny, we may have to
go beyond the risk-significant sequences to some of the even
less important sequences, to the extent that that may impact
the decision.
So, that's another characteristic of these
requirements, and one of the feedback discussions we had
yesterday is that we probably need to work on a definition
of what we're talking about when we use these terms,
dominant, and risk-significance.
We did not include those in the actual definitions
section, and I think we got some feedback that we would be
well advised to add that.
If we can skip the next slide -- so, working sort
of in a top-down fashion, we, of course, then have the
elements, the nine elements of the PSA, and these are the
same nine elements that we used in Draft 10.
And they are very typical of what you would see in
the breakdown: Initiating Events, Sequence Development,
Systems Analysis, Data Analysis, and so forth.
There are nine of these that cover the scope for
internal events, including internal flooding, but not
including internal fires.
If you look at these attributes, the attributes
call out the concepts of dominant versus risk-significant
accident sequences, and the other concept that's clearly
differentiated across these three columns is that
conservatism is tolerated, if you will, more completely --
more freely in the Category I applications, whereas it's not
really tolerated in the risk-significant arena for the
Category II and III applications.
And anytime that we permit or provide the
opportunity to meet requirements with conservative
assumptions, we have the caveat that the conservatisms do
not distort the ability to make risk screening applications
that you would need in a Category I.
The basic Category I type of applications are
applications in which you just want to make coarse screening
of elements of your PSA into very coarse risk categories, so
that conservatism would be permitted, only to the extent
that it does not distort that kind of application.
So that's -- these attributes provide the logic
for how we tried to come up with a differentiation, when
appropriate, for the supporting requirements for each of the
categories.
CHAIRMAN APOSTOLAKIS: Under Data Analysis, it
says realistic quantification of mean values.
MR. FLEMING: Right.
CHAIRMAN APOSTOLAKIS: In many PRA-type analyses,
a lot of people take a point estimate, and they say, well,
this is a mean value.
That's not what you mean here. You have to alert
people to the fact that the mean is not the same as
somebody's best estimate.
MR. FLEMING: That's right. The concept for data
and quantification is that point estimates, which could be
conservative estimates, as long as they don't distort the
risk profile, are accepted for Category I.
Mean point estimates are expected for Category II,
and that means that you have to carry through your
uncertainty analysis to a sufficient extent to be able to
show, demonstrate that you have mean values.
CHAIRMAN APOSTOLAKIS: That's stated somewhere?
It should be clarified.
MR. FLEMING: We certainly intended it to be.
CHAIRMAN APOSTOLAKIS: Yes. I don't remember
seeing that.
DR. KRESS: That's the mean value of only the
aleatory uncertainty?
MR. FLEMING: Aleatory and --
CHAIRMAN APOSTOLAKIS: Epistemic uncertainties, as
well. I think it is really epistemic.
DR. KRESS: Yes. That's only the --
CHAIRMAN APOSTOLAKIS: The failure rate is
epistemic.
DR. KRESS: Yes.
MR. FLEMING: Whatever epistemic uncertainties
that are included in the model.
CHAIRMAN APOSTOLAKIS: Yes, the human error rates.
MR. FLEMING: Yes, the human error rates, and --
CHAIRMAN APOSTOLAKIS: So that's a key point, and
I think that maybe we can look for a place to make sure --
DR. KRESS: Both of those things need
clarification.
MR. FLEMING: For example, that would require some
kind of uncertainty analysis be done at the data level, but
not necessarily propagated all the way through to CDF and
LERF.
CHAIRMAN APOSTOLAKIS: But when you propagate mean
values, in some instances, as you know, the variance plays a
role.
MR. FLEMING: Right.
CHAIRMAN APOSTOLAKIS: I think people will find it
easier to just do a Monte Carlo simulation. That is at
least numerical, you know. Just do it.
MR. FLEMING: And that may be, in fact, the case.
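The variance effect raised here is the state-of-knowledge dependency: when two basic events share the same uncertain parameter lambda, the mean of the cut set involves E[lambda^2] = E[lambda]^2 + Var(lambda), so multiplying mean values underestimates the cut-set mean. A Monte Carlo sketch of this point (the lognormal parameters are illustrative assumptions, not values from the standard):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative epistemic distribution for a failure rate lambda:
# lognormal with median 1e-3 and error factor 10 (assumed values).
median, ef = 1.0e-3, 10.0
sigma = np.log(ef) / 1.645
lam = rng.lognormal(np.log(median), sigma, 200_000)

# Cut set of two components whose failure rates share the same
# state-of-knowledge uncertainty (perfectly correlated epistemically).
naive = lam.mean() ** 2        # plugging mean values into lambda^2
correct = (lam ** 2).mean()    # propagating the full distribution

print(f"naive = {naive:.2e}, correct = {correct:.2e}, "
      f"ratio = {correct / naive:.1f}")
```

For a perfectly correlated lognormal pair the ratio is exp(sigma squared), roughly seven with these numbers, which is one reason straight Monte Carlo propagation, as suggested in the discussion, is often easier than tracking variance corrections analytically.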
CHAIRMAN APOSTOLAKIS: I wonder whether we can,
before Karl moves on to discussing requirements, maybe we
can start with the LERF, the very last one, Level II
analysis, and make sure we cover it before the expert
leaves? How about that? Is that okay?
MR. FLEMING: Sure.
CHAIRMAN APOSTOLAKIS: Unless there is something --
MR. FLEMING: Sure.
CHAIRMAN APOSTOLAKIS: Do you have any viewgraphs
on this subject?
MR. SCHNEIDER: No. I was just basically going to
take questions from the Committee, but I would want to put
something in overview in terms of what was done with the
LERF section.
The intent was not to be a full Level II PSA, but
to look at the LERF surrogate that the NRC's been using for
regulatory review. And the three categories in --
The words probably don't specifically state the
way it was structured, but of the three categories, Category
I was generally intended to be the conservative estimate of
LERF, using bounding assumptions, where bounding assumptions
would provide acceptable results with sufficient margin.
As you move up the Categories, you will get
increased resolution and increased precision. You include
more information, more phenomena, and more details on the
quantification.
So as you go from Category I, II, and III, what
you should be getting is a more refined prediction of LERF,
generally moving down.
The expectation is that Category I estimates
should not under-predict LERF.
Okay, I guess with that as an overview, I'll take
questions.
DR. KRESS: Well, I had a couple of questions,
mostly -- I thought that was a fairly good section of the
standards, but I had some questions that I think are mostly
just of a clarification nature.
On page 126 of my version of the document, in the
Category III applications, you say in the bottom box there,
you say you include a requirement that the effects of in-
vessel melt retention ought to be included.
And I wonder why you felt it necessary to actually
spell that out. Is that at all applicable to any operating
plants we have?
MR. SCHNEIDER: There are several CE plants that
have the ability to be bottom-flooded, and some of them have
credited a certain proportion in their detailed Level IIs,
more or less a certain proportion of the events wouldn't
necessarily go to failure, because they could flood all the
way up to the nozzles.
So, there's a -- we have integral lower heads, so,
as a result of the design differences, at least for the
plants that I'm familiar with, it is a consideration that
has shown up in PSAs, and could result in certain events
that would otherwise have gone to failure not going to
failure.
DR. KRESS: My understanding is that they all go
to failure if you include the uncertainties, and the size of
the vessels and the power levels are such that none of them
really can take much credit for in-vessel retention.
I would rethink whether or not I wanted to have
that called out, specifically, there. But maybe I'm wrong
there.
MR. SCHNEIDER: I believe that most of them can,
and the analyses depend on -- there is, I guess, the
probabilistic assessment that a certain fraction of them,
under -- I guess there were two issues.
One was a delayed injection into the RCS, coupled
with flooding of the exterior, which would give you a high
probability of recovery. And that wouldn't be recovery in
another high-pressure event or another event that progressed
slightly differently. That's why I say it's the level of
precision that's moving.
What's happening is that you're reducing your LERF
probability.
DR. KRESS: My feelings are that the uncertainties
are so large in that that it probably is not useful.
MR. SCHNEIDER: Understood.
DR. KRESS: Likewise, on page 128, in defining a
large early release, it says that the analyst may consider
mitigating factors, such as plate-out and deposition of
fission products released from the fuel, and the release
pathway characteristics.
I certainly agree with that, but unfortunately,
nowhere in the standards do you mention any standards for
fission product release modeling that I can see, because you
are dealing mostly with LERF, which doesn't really involve
the modeling of fission product release. But if you are
going to take credit for mitigating issues, then you have to
know something about the timing of the release. You have to
know something about the species. You have to know
something about the aerosol characteristics, which depend on
those things. So when you say they may take credit for it,
you don't go the next step and say, but if you do, you will
have to meet certain standards in your fission product
release model. So this is more just a comment than a
question.
MR. SCHNEIDER: Good point.
DR. KRESS: And I guess I had one other. This is
a clarification question.
On Table 4.49, on the dominant contributors to be
considered in LERF, I was a little bit interested in why,
under hydrogen combustion, for example, you included Mark 1s
and Mark 2s, which I thought were inerted, and why you
didn't include large dries, because in combination with
other loads, hydrogen combustion could be the straw that
puts you over the brink. So I was just wondering how the
check marks came about in that table?
MR. SCHNEIDER: For the large dries, what the
Level II analyses and the experiments that have been
followed up and used to support this show is that the DCH
and hydrogen combustion really aren't concurrent. They do
occur displaced in time, and while adding the two together
would put you above the brink, the probability that they
will both be there as a dominant contributor hasn't shown up
in practice, in experiments.
I think that is why we didn't put the check mark
there for that, but we did keep it for the DCH-induced
failure with a certain probability; when you start adding
the DCH and hydrogen, the probability would be a lot lower.
Also, a lot of the analyses that are actually
being done, when they do the DCH, consider the hydrogen
combustion in conjunction with the DCH as well; so this is
really for the hydrogen combustion independent of the high-
pressure melt ejection.
DR. KRESS: Okay, and the checks for the Mark 1s
and Mark 2s?
MR. SCHNEIDER: I'll turn that over to Rick Hill.
MR. HILL: This is Rick Hill, GE, and a member of
the project team.
Hydrogen combustion is listed for Mark 1, Mark 2
even though they are inerted plants. They are oxygen
controlled plants, and there are scenarios where you could
de-inert or have oxygen in the containment, and we feel that
that is a question of Level II modeling that should take
place, even though obviously the risks are very low.
DR. KRESS: Okay, it wasn't screened out on low
probability?
MR. HILL: Right.
DR. KRESS: Okay. A similar question on this
table. Why did you feel like you could exclude steam
explosions from consideration in large dries and ice
condensers?
MR. SCHNEIDER: It goes pretty much back to what
the existing Level IIs tend to show, which is that the steam
explosion phenomenon is sufficiently uncertain and of low
enough probability that, for a LERF assessment, it just was
over-dominated by all the other processes.
The main issues in terms of releases to the public,
pragmatically, are where you have the loss of containment
isolation above ground, typically, and it would be loss of
containment isolation, the ISLOCA, and the steam generator
tube rupture.
To a much lower extent you have the probabilistic
potential that you can fail containment due to the high
pressure.
The steam explosions typically occur in the lower
portions of the cavity. You would have to fail the
containment in a way that would affect the above-ground
releases and it was just felt to be a much lower probability
event that would be more than covered by the others as long
as you are not doing a detailed Level II.
DR. KRESS: So you are relying on the risk
insights --
MR. SCHNEIDER: From the Level IIs that were
done --
DR. KRESS: -- from the Level IIs that were done
by the IPEs.
I suspect that that might be a risky thing to rely
on for this. I am not sure I would want to exclude steam
explosions, at least I don't think the explosion itself is
going to damage the containment.
We are dealing with containment here --
MR. SCHNEIDER: Right.
DR. KRESS: -- but I think there is a high
probability it can add pressure to an already pressurized
containment, and might ought to be considered.
MR. SCHNEIDER: Well, we have looked at that issue
and that is not the driver.
You can vaporize a lot of the water but the
robustness of the containments are such that you are not
going to, pragmatically you are not going to have enough
water in the containment to take that to a containment
failure condition, but we could look at that and reconsider
and check the numbers out.
DR. KRESS: Okay. Well, that is the extent of the
questions I had. Do you have some?
CHAIRMAN APOSTOLAKIS: I have one or two, but
maybe it is because of my ignorance of the subject.
I have always been mystified by the definition of
large early release, so I was looking for a definition.
So on page 8 it says that large early release is
the rapid, unscrubbed release of airborne fission products
from the containment to the environment occurring before the
effective implementation of offsite emergency response and
protective actions.
Then on page 128 it says you define LERF
consistent with the definition given on page 8, Section 2,
but then it goes on and elaborates a little bit on early --
which means, early refers to a timeframe prior to
effective evacuation of the inhabitants of the exclusion
area boundary.
My question is why are we avoiding giving a time,
a rough time? I have heard in the past before three hours,
but I don't know. Is it before any effective implementation
of offsite emergency response? I wonder if that is a
scientific definition -- early.
What if the emergency response measures fail and
they are delayed? Well, then early release is anything that
is released before that? There has to be some time --
MR. SCHNEIDER: For most of the transients if you
look at what contributes to LERF, it mostly isn't an issue.
It comes about because, if you have loss of containment
isolation, it is core damage events that occur and have an
early core damage failure.
CHAIRMAN APOSTOLAKIS: Early?
MR. SCHNEIDER: Yes, and so you are generally
talking the first several hours, so when you initially get
to this, you are dealing with 4 to 8 and then that depends
on how quickly they can get the information out to the
public, how much population they have around the site.
For example, in Arizona, it's not going to be that
bad. They know everyone's phone number, but in other areas
it may take longer for evacuation. So to put a rule on the
time -- we didn't feel comfortable doing an exact,
precise time, but the issues that you have to consider are
about how rapidly is the staff going to be able to recognize
they are undergoing a core damage state, how quickly can the
information get out, and when do they expect the releases to
be felt given the event? For example, steam generator
ruptures may occur very late in time.
What this does is gives them the flexibility to
say not all steam generator tube ruptures have to be
considered large early if you can keep the core covered for
12 to 15 to 18 hours, but if you have a steam generator tube
rupture that rapidly progresses to failure with an open
MSSV, then that would be an early release, so --
CHAIRMAN APOSTOLAKIS: I guess my question is why
is the condition of early or late, why does it depend on the
evacuation and not on some physical characteristics of the
accident?
MR. SCHNEIDER: The QHO was the original --
CHAIRMAN APOSTOLAKIS: The unscrubbed --
MR. SCHNEIDER: Because you go back to the
original definition of what was trying to be accomplished
maybe five to eight years ago when you had the Quantitative
Health Objective, and that was basically to limit the number
of fatalities, to put it into a certain level -- to make it
consistent with the rest of the industry, and what they did
is they made a surrogate and the surrogate was LERF.
So you have taken away now a lot of the features
that went into what the QHO was. But the QHO included
evacuation, sheltering, and all of those features, so the
LERF retained some of that flexibility without the clear
definition of how it affects the population. And so you
need to define something, and if you put a defining time,
for certain plants that may not be an appropriate timeframe.
So we're allowing them the flexibility to judge their
emergency planning procedures against the various events,
and then determine whether they would class a specific steam
generator tube rupture as a large early or a delayed
release.
Otherwise you may end up in situations where you
have later releases that, because of some issue associated
with the transient that may have prevented them from
alerting the public, might really be classified as a large,
early release; but if we put a short timeframe in, that
would just automatically throw it out, and by the same
token, if you put a long timeframe, you probably are
including too many events, especially for the plants with
low population areas.
So we did allow some flexibility. It mainly will
affect issues like steam generator tube rupture and some of
the high pressure melt events. They have to justify how
they are binning it.
DR. KRESS: I think when NRC went to the LERF --
and I may be interpreting them incorrectly -- what their
intention was to do was to more or less separate site issues
from design issues, and to do it in such a way that the LERF
would cover essentially most of the sites.
CHAIRMAN APOSTOLAKIS: Right.
DR. KRESS: And now we seem to be going away from
that and going back and saying now we have to -- if you are
going to do a LERF that is site-specific, you have got to
have a Level III PRA, which we are not dealing with in here
at all. We have no standards for Level III. We don't
discuss fission product standards, and I think it is a
mistake in this particular standard to go away from NRC's
intended use, where the LERF that we have is related to
plant accident issues, like George says.
I think you do need some sort of tighter
definition of large and early release that relates to
actually the timing of the accident that would be site-
independent, frankly, and that is a problem I had with it
too.
CHAIRMAN APOSTOLAKIS: The definition seems to
depend on site characteristics in emergency --
DR. KRESS: Yes, but those fall into Level III
categories and you have no standards for Level III, so I
think you have a bit of a problem with that.
CHAIRMAN APOSTOLAKIS: Please.
MR. FLEMING: There was also some industry
perspective on the definition of LERF that we put into the
EPRI PSA Applications Guide, and we offered a definition in
the PSA Applications Guide which is consistent with this
definition but it was a little bit different.
The philosophy from the industry perspective was
to expand the range of risk-informed applications to be able
to consider some of the containment systems that might be
involved in the applications. We came up with a definition
of LERF in the EPRI PSA Applications Guide which was based
on the philosophy of capturing all the risk of early health
effects, and we used a definition that was based on the
Seabrook Level III PSA, which had a vast inventory of Level
III analyses. Also, the Staff had pretty much concluded
that Seabrook had one of the more limiting sites with
respect to the emergency plan. The definition in the
Applications Guide, which I believe earlier was something
like within four hours of vessel breach -- which for
Seabrook was the time it took to clear out the EPZ based on
their site-specific emergency plan -- was part of the
definition for quite a while.
We dropped the hour definition in recognition of
the fact that some plants may be able to clear out their EPZ
in two hours or one hour and if they have site-specific
analyses to be able to tighten up their definition of LERF
and not use the conservative definition for Seabrook they
would have that option.
But the philosophy was to provide a surrogate for
a Level III PRA that would expand the range of applications
beyond what CDF could look at without dragging in all the
issues that we have difficulty with -- debris bed cooling and basemat
melt-through -- and just take a subset of the Level II
issues into the risk-informed arena.
CHAIRMAN APOSTOLAKIS: I appreciate the effort but
the problem I see with that is that somebody may declare
their plant as capable of evacuating within 2 hours
without -- and then that is buried somewhere there and that
may be significantly uncertain.
DR. KRESS: You say how do you know that, whether
you did Level III, how good is your Level III.
CHAIRMAN APOSTOLAKIS: And to base a quantity that
plays such an important role in decisionmaking on these
kinds of assumptions makes me a little uneasy.
I would rather have a definition that depends on
the design, as Tom said, and the accident characteristics,
at least to have some bounds and give maybe some flexibility
because perhaps Karl's point is an important one that you
can't really ignore the fact that they may have very good
evacuation plans, but limit the impact of that. Perhaps
that would be a better way of doing it, because what if
someone says we can do it in an hour, and that is a sentence
somewhere in a three-volume PRA and, you know,
the whole calculation of LERF depends on that --
DR. KRESS: Depends on that --
CHAIRMAN APOSTOLAKIS: -- and it would be very
hard to touch it.
I would feel better if there were some
recognition, some acknowledgement that these issues are
important because I fully appreciate the arguments you made.
Now there is also on page 128 a definition of
LERF that captures the contributions to the risk of early
health effects, but it seems to me that that has been stated
several times. It is just a matter of editorial cleaning
up, I think.
I think a lot of the discussion in 128 on the
right-hand column is very repetitive. That's your business.
MR. SCHNEIDER: Okay. I will take that into
consideration.
CHAIRMAN APOSTOLAKIS: Maybe we should start using
fuzzy sets, you know -- I was so dead set against them, but now I
see those definitions.
[Laughter.]
CHAIRMAN APOSTOLAKIS: Are you guys willing to
develop a standard for fuzzy PRA? Say no.
Okay. Are we done with LERF? Well, back to Mr.
Fleming.
MR. FLEMING: Thank you.
CHAIRMAN APOSTOLAKIS: I really hate to work for
an hour and a half without a break.
DR. KRESS: Yes, me too.
CHAIRMAN APOSTOLAKIS: Is our Federal employee
objecting to taking a break now? Okay. We will take a
break now for -- oh, well. How do we define a break without
using a clock? What about 15 minutes.
[Recess.]
CHAIRMAN APOSTOLAKIS: Okay. Back to session.
Karl?
MR. FLEMING: Karl Fleming from the project team.
Before I return to Section 4, I wanted to make a
comment. The cost estimates I provided earlier were for
time and materials and not a fixed price contract.
[Laughter.]
MR. FLEMING: Getting back to Section 4, one of
the comments that we wrestled with from Draft 10 was that
somebody counted up 900 and some odd requirements that had
the word "shall" and we were trying to avoid a frankly silly
exercise where we sit down and negotiate how many "shalls"
could be changed to "shoulds" or "mays" or whatever and it
didn't seem to be a very useful exercise, so what we decided
to do as part of our effort for Rev. 12 was to back up to,
say, 20,000 feet and from the point of view of people who
are competent to perform peer reviews and people who have
lots of experience in PSAs is to boil down these
requirements into a set of irreducible high level
requirements that point to basic attributes of a PRA that we
are all aware of.
These would be attributes such as the completeness
of the PRA, treatment of dependencies, the degree of realism
in the assumptions and the success criteria, the degree of
fidelity between the plant and the PRA model, and how well it
reflects the as-built, as-operated plant, including design changes,
and so forth and go across each of the nine elements of the
PRA and come up with high level requirements phrased in
"shall" language that everybody would agree have to be
present and form the critical mass of what is needed for the
product that we are going to put the PRA label on, whether
it is Category I, II or III.
One of the tasks that we laid out here, and I will
walk through some examples of those in a few minutes for
accident sequences, is to capture the essence of the
requirements in these high level requirements that typically
are a number in the range of maybe four or six high level
requirements for each of the nine elements, and we used this
as a starting point for organizing and defining the detailed
supporting requirements.
Many of these high level requirements are actually
in Draft 10 but they may be difficult to find because it was
presented in sort of a textual format and we wanted to bring
them out and make them very clear and explicit in this
version.
The concept is that each of these high level
requirements would apply to all three application
categories, but the extent and the context in which you
would apply them would be different depending on the
characteristics that I mentioned in the earlier
presentation.
With this kind of a concept what I would like to
do is actually walk through some of the high level
requirements for accident sequences.
DR. BONACA: So you are in Chapter 4?
MR. FLEMING: Yes, we are in Chapter 4.
CHAIRMAN APOSTOLAKIS: I was wondering whether the
members had any comments on the definitions and the risk
assessment application process that are Chapters 2 and 3.
DR. BONACA: The Definitions section, you mean?
CHAIRMAN APOSTOLAKIS: Yes. I just got a comment
from Mr. Barton, who could not be here today. On page 10,
unavailability is defined as follows -- the fraction of time
that a test or maintenance activity disables a system or
component, also the average unreliability of a system or
component over a defined period of time.
His comment is the word "unreliability" is not
defined, so there should be a definition of unreliability as
well.
That brings me to another comment, which is a
favorite of mine. This definition I recognize is one that
the industry has been using for a long time, the fraction of
time that a test or maintenance activity disables a system.
It is not consistent with the definition in
reliability theory, which is that the component or system is
unavailable due to any reason at Time T, and this has been
an issue before in other contexts.
When was the last time we had an appendix with a
definition and I didn't like it there either? The
maintenance rule.
It seems to me that if one decides to go with this
definition of unavailability then one would have to have in
the expressions for the probability of the thing not
responding --
MR. BUDNITZ: [By Telephone] This is Bob
Budnitz --
CHAIRMAN APOSTOLAKIS: Okay. We know who you are.
MR. BUDNITZ: Oh. I know who you are too.
CHAIRMAN APOSTOLAKIS: Can you see us?
MR. BUDNITZ: I cannot see you. I am only on a
phone. Are you more gorgeous than usual?
DR. KRESS: Yes. The answer is yes.
CHAIRMAN APOSTOLAKIS: Okay, we can hear you very
well, Bob.
MR. BUDNITZ: Look, I have to be out of here at a
quarter after, which is just over an hour from now.
CHAIRMAN APOSTOLAKIS: Okay, don't worry. We will
be done by then.
DR. KRESS: Do you have some comments you want to
make, Bob?
MR. BUDNITZ: You mean upfront?
DR. KRESS: Yes.
MR. BUDNITZ: Where are you in the agenda?
CHAIRMAN APOSTOLAKIS: We are talking about
definitions. We finished LERF.
I will give you a few minutes to catch up, okay?
MR. BUDNITZ: Yes. I thought I was on because
when you come to expert judgment I am the one.
CHAIRMAN APOSTOLAKIS: We will make sure we do
this before you have to go.
MR. BUDNITZ: Okay.
MR. SIMARD: Expert judgment as well as any
questions about initiating events -- Bob and Steve in that
area.
CHAIRMAN APOSTOLAKIS: Initiating events is coming
up.
So as I was saying, if we adopt this definition,
which the industry seems to be comfortable with, then there
has to be an extra term, the probability of failure on demand.
I am not sure that we have that in the
expressions.
Now if you go to standard mathematical books on
reliability, unavailability includes that so I don't know
what the resolution should be.
At some point we have to make sure we have one
definition.
I think the industry refers to the latter, the
probability of failure on demand is unreliability, which
again conflicts with the mathematical definition which says
it is the probability of not performing in a period of time,
so I don't know.
Do the members have any suggestions? Should we
try to change the way the industry uses these terms?
DR. BONACA: Well, for me, not including other
reasons why a system or component is unavailable, it just
doesn't make any sense.
CHAIRMAN APOSTOLAKIS: They may include it in the
calculations. I don't know.
DR. BONACA: I understand that.
CHAIRMAN APOSTOLAKIS: The definitions should
include it, in my view --
DR. BONACA: -- should include it.
CHAIRMAN APOSTOLAKIS: Okay, so we will probably
make a comment to that effect and the committee will have to
decide.
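The two competing usages of "unavailability" in this exchange can be written out side by side. This is a minimal sketch with illustrative numbers; the function names and the independence approximation for the demand-failure term are assumptions for illustration, not language from the standard.

```python
# Sketch contrasting the two definitions of unavailability under discussion.
# All numerical values below are illustrative assumptions.

def unavailability_maintenance_only(maint_hours: float,
                                    total_hours: float) -> float:
    """Industry usage cited on page 10: the fraction of time a test or
    maintenance activity disables the system or component."""
    return maint_hours / total_hours

def unavailability_reliability_theory(maint_hours: float,
                                      total_hours: float,
                                      p_fail_on_demand: float) -> float:
    """Reliability-theory usage: probability the component is unavailable
    for ANY reason at time T -- approximated here as the maintenance
    fraction plus an (assumed independent) failure-on-demand term."""
    q_maint = maint_hours / total_hours
    return q_maint + (1.0 - q_maint) * p_fail_on_demand
```

With, say, 87.6 maintenance hours in an 8760-hour year and a 1E-3 demand failure probability, the first definition gives 0.01 while the second gives roughly 0.011; the difference is exactly the extra term the Chairman is pointing to.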
Now speaking of definitions, I also have a
question, but maybe we can wait until -- the human error.
The definition of latent human error, do you want to do it
now or when we talk about human errors?
MR. FLEMING: As you wish.
CHAIRMAN APOSTOLAKIS: Well, it says here on page
8: a human error, typically by mispositioning or
miscalibrating a component, that if not detected or corrected
predisposes the affected component to failure when demanded.
This is a very limited definition of the latent
error, and I would recommend that you use Jim Reason's
definition of latent conditions and latent errors, which is any
human action before the actual active error takes place and
not just mispositioning or miscalibrating.
Any other comments from the members on the
definitions?
DR. BONACA: I would like to provide one.
On page 5 on the definition of accident class
there is a use of the word "severe" accidents, and I believe
that is a little bit of a narrow connotation there, somewhat
confusing. I would certainly prefer to see a grouping of
accidents that by severe accidents we indicate very specific
ones but you include the category of transients that are not
necessarily ending up in a severe accident.
Now on the issue of accident consequences, here it
is more of a -- you know, in regulatory space it means only
doses. I mean, it means only radiological release, and I am
not sure that you may not want to look at that definition
there if it creates an unintended conflict, a confusion.
This I am just raising as a question and would let
the ASME decide what is the proper approach.
At the bottom of page 5, "available time"
specifically talks about time from which an indication is
given that human action is needed to where the action was
performed to "avert" -- first of all, the word "avert" --
but core damage I think again is a very narrow definition
there.
I don't think it is intended in only that sense of
available time. I think there are actions that prevent
other events, not only core damage and again it is very
narrow to focus on core damage -- maybe to where the action
was to be performed to achieve success, whatever that means.
You could let it be in the analysis.
The definition on page 6 on containment analysis
needs work. There is some editorial problem there -- no,
that's okay -- definition of external events on page 6,
again there is always this pointing to may lead to core damage
or larger releases, but really you are looking at external
events in a broader sense -- and again that reference to
core damage or early release I don't think is necessary in
the context of the definition.
CHAIRMAN APOSTOLAKIS: Well, for external events
though that is really what you worry about. Isn't it? I
mean if you have an earthquake or --
DR. BONACA: Yes. I am talking there about
external events as initiating events originating outside -- I
mean you may conclude in the analysis they will lead you to
that. You are still defining certain external events.
For example, I could have conceivably a typical
external event for a PRA that you always analyze that in
that particular plant will not lead to CDF or LERF.
It would still be --
CHAIRMAN APOSTOLAKIS: But the concern is that it
might. That is why you analyze it.
DR. BONACA: Yes, but if you look at the
definition -- may lead.
CHAIRMAN APOSTOLAKIS: May lead to the part of
external events?
MR. FLEMING: Internal events. It's in the scope.
DR. BONACA: Page 7 on the harsh environment,
there is a reference to appropriate for design basis or
beyond design basis accidents.
I would rather have a definition that is not
narrowed that much. Again, an environment -- as a
result of the postulated accident condition.
I mean, there are some other ones, and I don't
want to spend any more time. I will provide them.
CHAIRMAN APOSTOLAKIS: Sure. We'll have an
appendix.
DR. BONACA: Again, the word, unavailability on
page 10, mirrors the comments we had.
MR. BERNSEN: Let me just ask one question. We
would like, wherever possible, to use existing definitions,
definitions that have been published, if they're at all
consistent with our intent. So that if you have some
alternative definitions that have been published, if you
could cite the reference or whatever for them, that would be
very helpful.
CHAIRMAN APOSTOLAKIS: Sure. We will probably
have an appendix to our letter with the detailed comments,
maybe line-by-line. I don't know.
One last comment on the definitions which may
involve Bob Budnitz. On page 6, there is a definition of
expert elicitation.
First of all, I would suggest that you change it
to expert opinion elicitation. It's not the elicitation
that's expert; it's the opinion.
And second, it says a formal highly structured and
documented process. Now, if you go to the actual section on
expert opinion, there is allowance for less than highly
structured processes.
So, it seems to me that it's overly restrictive to
define it as a highly-structured process. Above, when you
use the technical integrator, then the process is not
necessarily highly structured.
It's highly structured when you go to the full
treatment that the technical facilitator, integrator,
demands.
And I think that in Section 6, you make -- I'm
sorry, 4.6, you make that point well. So it seems to me
this definition here should delete -- maybe you can say a
structured formal approach, rather than highly structured.
MR. BUDNITZ: Which definition are you looking at?
CHAIRMAN APOSTOLAKIS: Expert elicitation on page
6. Okay?
MR. BUDNITZ: Yes, you can just take the "highly"
out of there. I understand that point. It's a good point.
CHAIRMAN APOSTOLAKIS: Or maybe delete "highly
structured" completely and say a formal and documented process, and you
differentiate in 4.6, regarding the various levels.
MR. BUDNITZ: It's got to be structured, George.
CHAIRMAN APOSTOLAKIS: Okay. Can you have a
formal process that's not structured?
DR. KRESS: Yes.
CHAIRMAN APOSTOLAKIS: Okay. My expert in English
tells me yes. I will not question it. I have questioned it
in the past and have regretted it.
MR. BERNSEN: I would observe that this is a
formal process.
CHAIRMAN APOSTOLAKIS: But it's not structured.
[Laughter.]
CHAIRMAN APOSTOLAKIS: Thank you very much, Mr.
Bernsen.
DR. KRESS: I had a couple of items on the
definitions.
CHAIRMAN APOSTOLAKIS: Sure.
DR. KRESS: Most of mine were covered by Mario,
but on page 6, the definition of core damage frequency, I
wonder why they shied away from the usual connotation that's
per year instead of per unit time, although you could
define it any way you want to, but it's usually in the use of
CDF and LERF, it's always per year, per reactor year.
CHAIRMAN APOSTOLAKIS: Actually, in the text
somewhere they say that it's not per reactor year; that
it's per calendar year. That was a question I wanted to
ask, why-- because you're considering all modes of
operation, so even if the reactor --
But if the reactor is in cold shutdown, do you
really care? I mean, the definition is somewhere, and let
me see if I can find it.
[Pause.]
You're saying it in the text, but --
DR. KRESS: If Dana were here, he'd say, yes, I
care.
CHAIRMAN APOSTOLAKIS: Yes, I what?
DR. KRESS: If Dana were here, he'd say, yes, I
care, to your question.
CHAIRMAN APOSTOLAKIS: I think there is an
inconsistency between the definition and the text. Karl?
MR. FLEMING: Yes, with respect to the -- I
believe that in the technical requirements for quantifying
initiating event frequencies, for example, you'll see the
need for expressing units in terms of calendar year.
That's just to clarify that the alternative might
be to calculate it per reactor operating year, and then
you're going to be coming up with units that may be
inconsistent with the criteria, you know, all the safety
goals and core damage objectives, and so forth, really are
calendar year.
There has actually been some confusion out there
in the industry about what calculations should be performed.
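The unit distinction Mr. Fleming is drawing can be sketched in a couple of lines. This is an illustrative sketch, not the standard's prescription; the function name and the 90% at-power fraction are assumptions for the example.

```python
# Sketch of the calendar-year vs. reactor-operating-year distinction: a
# frequency estimated per reactor-operating-year must be scaled by the
# fraction of calendar time at power before comparison with calendar-year
# criteria such as the safety goals. The 0.9 fraction is an assumption.

def to_per_calendar_year(freq_per_operating_year: float,
                         fraction_at_power: float) -> float:
    """Convert an at-power event frequency to a per-calendar-year basis."""
    return freq_per_operating_year * fraction_at_power

# 0.1 events per reactor-operating-year, with the plant at power 90% of
# the time, gives roughly 0.09 events per calendar year.
```

Skipping this conversion is one plausible source of the industry confusion Mr. Fleming mentions: the same data yield different numbers on the two bases.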
DR. KRESS: Okay, the other question I had was on
common-cause failure. I thought that defining it in terms
of a short period is a good idea, but it leaves me wanting a
little bit more in terms of what is meant by short.
It has something to do with whether the two
failures are close enough in time that they actually impact
the sequence somehow.
And so somebody needs to add a little more of a
definition of "short" in there; I thought it could be
expanded on.
DR. BONACA: I think it's a very good comment.
For example, you may oftentimes have a common-mode failure
caused by a replacement, say, of a component with a
different material that will lead to the failure later on.
Many of them are latent, and then may develop
themselves in a long time.
DR. KRESS: You have to have a certain probability
that it's going to impact the sequence.
DR. BONACA: That's right. So that's a good
requirement to clarify that.
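Dr. Kress's point about what "short" should mean can be made concrete with a small predicate. This is a hypothetical sketch: the function name, the use of a shared cause flag, and the 24-hour coupling window (standing in for a sequence mission time) are all assumptions for illustration, not definitions from the standard.

```python
# Sketch of the 'short period' question for common-cause failure: two
# failures count as one common-cause event only if they share a cause AND
# occur close enough in time to affect the same accident sequence. The
# 24-hour default window (a stand-in for mission time) is an assumption.

def is_common_cause(t_fail_a_hr: float, t_fail_b_hr: float,
                    shared_cause: bool,
                    coupling_window_hr: float = 24.0) -> bool:
    """Treat two failures as a single common-cause event when they share a
    cause and fall within the coupling window."""
    return shared_cause and abs(t_fail_a_hr - t_fail_b_hr) <= coupling_window_hr

# Same-cause failures 2 hours apart couple; six months apart they do not,
# even if the root cause (e.g., a bad replacement part) is identical.
print(is_common_cause(1.0, 3.0, True))     # True
print(is_common_cause(1.0, 4380.0, True))  # False
```

The second case is exactly Dr. Bonaca's example: a latent, materially identical defect whose failures develop over a long time may not couple within any one sequence.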
CHAIRMAN APOSTOLAKIS: Karl, do you plan to spend
any time on initiating events? I would like to finish
initiating events and expert judgments, so that Bob can be
off the line.
MR. FLEMING: Fine. If you have questions, I did
not prepare.
CHAIRMAN APOSTOLAKIS: Okay, let's finish first.
DR. BONACA: I just have one more comment,
unfortunately, on -- a question, actually. That's why I'm
raising it, on the definition on page 9, under PRA Upgrade.
It says the incorporation into the PRA models of a
new methodology that has not been previously peer-reviewed.
I assume that if I incorporate it into my model, a new
methodology, whether or not it was peer-reviewed, it would
be an upgrade of my PRA.
MR. FLEMING: Yes.
DR. BONACA: Unless I misunderstand what you
meant.
DR. KRESS: It doesn't matter whether it's peer-
reviewed or not.
DR. BONACA: That's right.
DR. KRESS: It's still an upgrade.
DR. BONACA: Maybe I'm missing something.
MR. BERNSEN: We could let Rick answer that, but I
think the intent here is that this is a definition that's
unique to the standard, and, in particular, to the peer
review section where we're talking about what changes in the
PRA need to have a peer review.
So it's kind of -- it's unique to the standard,
and that's why the differentiation. Is that right, Rick?
MR. HILL: Well, actually, I don't think I'd have
a problem with taking out the, "that has not been previously
peer-reviewed."
Yes, it is unique to the standard, but the context
of what an upgrade is, is a change in methodology, rather
than just a change in time phasing like data or something
like that.
I also think that this has not previously been
peer-reviewed, might skew the definition to somebody
thinking, well, this particular methodology has been
reviewed someplace else, so, therefore, it's acceptable
here, without thinking about the application of that
methodology.
MR. WALL: Mr. Bonaca, I'd like to draw your
attention -- sorry, this is Ian Wall, a team member.
Mr. Bonaca, I'd like to draw your attention to
page 136, configuration control, Section 5, and Subsection
5.4.
We used these two definitions to distinguish an
upgrade from maintenance. For maintenance, we even have
kind of an internal review. It's for very minor
things.
An upgrade is a major thing which will require
some incremental peer review, pursuant to Section 6.
DR. BONACA: I understand, but still, I mean, I
may decide to upgrade by adding seismic or fire, okay? And
I'm going to use a PRA methodology which has been previously
reviewed. I'm asking somebody to put it in.
That's a major upgrade of the PRA. And so I would
call it an upgrade, irrespective of whether or not that
methodology has been peer-reviewed.
MR. SIMARD: We'll look at that. It sounds like
we ought to delete that phrase and just end the sentence
after new methodology.
DR. BONACA: I don't want to belabor it, I'm just
pointing out that it is something to look at. Thank you.
MR. SIMARD: Okay.
CHAIRMAN APOSTOLAKIS: Are we ready to move on to
initiating events? Since Karl doesn't have any -- maybe you
can put up the viewgraph you have which is the -- no, it's
on page 21, where it talks about --
MR. BUDNITZ: George?
CHAIRMAN APOSTOLAKIS: Yes?
MR. BUDNITZ: In the sequence of the text, expert
opinion comes first.
CHAIRMAN APOSTOLAKIS: Really?
MR. EISENBERG: No, it doesn't.
MR. BUDNITZ: Doesn't it?
CHAIRMAN APOSTOLAKIS: It's on --
MR. BUDNITZ: It's 4.6. I apologize.
CHAIRMAN APOSTOLAKIS: It's page 135.
MR. BUDNITZ: Of course.
CHAIRMAN APOSTOLAKIS: Are there any comments on
initiating events from the members?
[No response.]
MR. BUDNITZ: Well, that was easy.
DR. SHACK: My one comment is sort of really just
that it does appear that there is an awful lot of detail that
was in Draft 10 that disappeared from Draft 12. I
mean, that's common to the whole thing, and it's this
philosophy, perhaps --
I mean, typical ASME standards are fairly
prescriptive, and they provide a lot of detail. You know,
I've heard references here that the philosophy here is to
provide sort of a high level guidance to the peer review
panel who are assumed to be knowledgeable.
And you've omitted a great deal of detail that is
in other guidance documents like NUREG 1602, which was
another attempt to sort of set up guidance for PRA, or the
Draft 10 version.
And haven't you really lost something here in
omitting these details?
MR. BUDNITZ: Well, about initiating events --
this is Bob Budnitz from 3,000 miles away -- about
initiating events -- and I believe that this was something
that was true all the way through --
Remember -- and it's very important for you to
understand that the whole standard is telling the analyst or
the analyst team what to do and not how to do it. What to
do, and not how to do it.
Now, in the course of reviewing Rev 10, there was
some stuff in there that told them how to do it, and I, like
the others, took that out. Actually, I didn't have to take
a lot of it out; it was taken out in the intermediate thing
you never saw called Rev 11.
When a subset of our group took Rev 10, they made
the major, major changes of going to three columns of
requirements, and integrating the NRC certification
requirements with what had been there before to make a
larger list and straightening things out, a whole lot of
detail was taken out that was of the character of how to do
that.
And that was true here, too, however, I don't
think that, unless you find one -- and I'd be, of course,
eager to know -- I don't think there was any what-to-do's
that we lost in the course of taking out a lot of that how-
to-do.
So although it comes up here in initiating events,
it's really a question of philosophy for the whole thing.
It just happens to come up here, first, I suppose.
CHAIRMAN APOSTOLAKIS: Karl?
MR. FLEMING: Yes, to amplify on what Bob just
said, if you go back to Rev 10, in Section 3, I guess it
was, that had the detailed requirements, the entire content
of all the requirements for initiating events was one page.
All the other material that you found in Rev 10 on
initiating events and other issues like that, was back in
the Appendix, which was in the form of guidance and things
like that.
So, actually, if you look at the detailed
requirements we have in initiating events in Rev 12, there
is actually more here. There is more because we have
integrated in additional requirements that were in the
certification process that were not in Draft 10.
MR. BUDNITZ: But even that one -- it was really
only about a page, Karl, you're right.
Even there, there was some stuff that was how to
do it, that I then went through and took out, in the spirit
of what we were trying to do with the whole thing.
You see, if there are five different ways to
accomplish a certain thing, we made a decision up front that
it was erroneous for us to prescribe one of them.
Now, by the way, if they're all equivalent -- and
no one had ever done a PRA in this area -- it might be
useful to prescribe and have everybody do it the same way.
But, in fact, we've got 100-odd PRAs out there,
that all did it different ways. And you don't want any one
of them to say, gee, you did it incorrectly, because you
didn't do it the way we told them how to do it.
So, if you're reacting to that, I believe your
observation is completely correct, and we did it on purpose.
CHAIRMAN APOSTOLAKIS: There is some
inconsistency. I agree with you, Bob, that this is a
broader issue than just initiating events.
MR. BUDNITZ: Oh, of course.
CHAIRMAN APOSTOLAKIS: And I was planning to bring
it up when we discuss human reliability analysis, and expert
judgement.
In other words, in some instances, you give more
detailed guidance in the form of references, and in others,
you don't.
MR. BUDNITZ: Well, I --
CHAIRMAN APOSTOLAKIS: It's a matter of being
consistent.
MR. BUDNITZ: Well, without arguing the case, it
is -- if you can point out places where we can give
references that provide a good example of how one goes about
it, why, those are very valuable.
CHAIRMAN APOSTOLAKIS: I mean, I realize that this
particular standard is not really a procedures guide. It
doesn't really give you methods.
MR. BUDNITZ: Quite the opposite.
CHAIRMAN APOSTOLAKIS: You stayed away from it,
and, in fact, one of the criticisms, as you told us earlier,
was that you were too prescriptive in Rev 10.
So, the least we can do then is, when we discuss
HRA, and expert judgment, is to make sure that there we
eliminate the more specific advice that is given, which is
inconsistent with the other chapters.
Anything else in initiators?
DR. KRESS: I had one, George.
CHAIRMAN APOSTOLAKIS: Sure.
DR. KRESS: On page 33, Table 4.4-1(d), under Item
1(e)-D14, it talks about how the frequencies need to be
weighted by the fraction of time the plant is at power. I
think that needs to be made a little more clear that the
weighting goes in the denominator instead of the numerator.
It may be clear to everybody else.
MR. BUDNITZ: By the way, this is exactly the
place where the adjustment is made to the difference between
a reactor year and a calendar year.
DR. KRESS: That's right.
MR. BUDNITZ: That's exactly the point that we
spoke about five minutes ago, right?
DR. KRESS: That's it.
MR. BUDNITZ: This is the only place it's done.
This is the only place where frequency comes in, in quite
this way, right?
DR. KRESS: That's right, and that's why I thought
it needed to be made a little clearer as to what you're doing
here.
MR. BUDNITZ: Well, explain -- no sweat. What
wording would you --
DR. KRESS: Well --
MR. BUDNITZ: You're going to tell them to do the
arithmetic right, or something?
DR. KRESS: Well, basically, that's it.
CHAIRMAN APOSTOLAKIS: We do that in the
introduction.
DR. KRESS: If it's clear to everybody, okay.
MR. BUDNITZ: If it isn't clear --
DR. KRESS: It was clear enough to me, but I
wasn't sure it would be clear to everybody.
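Dr. Kress's denominator point can be spelled out with a toy estimate. This is an illustrative sketch only; the event count, years, and at-power fraction are invented numbers, and the function is not text from Table 4.4-1(d).

```python
# Sketch of the weighting discussed for Table 4.4-1(d): when estimating an
# at-power initiating event frequency from operating experience, the fraction
# of time at power belongs in the DENOMINATOR (it reduces the exposure time),
# not the numerator. All numbers here are illustrative assumptions.

def freq_per_operating_year(n_events: int, calendar_years: float,
                            fraction_at_power: float) -> float:
    """Estimate an at-power event frequency; exposure is calendar time
    weighted down to the time actually spent at power."""
    return n_events / (calendar_years * fraction_at_power)

# 3 at-power trips over 10 calendar years, 80% of which were at power:
# 3 / (10 * 0.8) = 0.375 per reactor-operating-year. Putting the 0.8 in
# the numerator instead would understate the rate (0.24).
```

This is also exactly where the reactor-year/calendar-year adjustment Mr. Budnitz mentions enters: dividing by weighted exposure yields a per-operating-year rate, while dividing by raw calendar time yields a per-calendar-year rate.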
I have another sort of comment on page 25 where
we're introduced to key safety functions, which I thought
was --
MR. BUDNITZ: Which requirement?
DR. KRESS: This is high level requirements for
initiating event analyses on page 25 of my version. At the
footnote, we're introduced to key safety functions.
I like that. I liked the list that they have
there, but the problem I have with it is, I hated to see
that relegated only to a footnote. I wish there was a
section in there talking about key safety functions and the
role they play here.
In fact, you see this footnote showing up with
multiple tables, all along through here. I thought that if
you could take care of it up front, and not have to repeat
it every time, it might help the readability a little bit.
But somehow I thought this was too important a
concept just to relegate to a footnote, and that was the
only comment there.
CHAIRMAN APOSTOLAKIS: Can you give me an example
of a safety function that is not a key safety function? If
I look at the way you define them, it seems to me that you
covered everything, reactivity control, core heat removal,
reactor coolant inventory control, reactor coolant heat
removal and containment bypass integrity.
MR. BUDNITZ: Oh, I suppose you'll find -- I
imagine that if I told you that the Center for Disease
Control in Atlanta concentrated on key diseases, I could
probably come up with some minor diseases they don't
concentrate on. I bet there are some.
CHAIRMAN APOSTOLAKIS: No, I'm not sure.
MR. BUDNITZ: Maybe not. Control --
CHAIRMAN APOSTOLAKIS: What you have listed here --
MR. BUDNITZ: Inventory, that's pretty much the
whole thing.
CHAIRMAN APOSTOLAKIS: It is pretty high level,
and it sounds like it's all-inclusive, so they are really
safety functions, so there isn't such a thing as a key
safety function.
I mean, the moment you talk about core heat
removal, reactor coolant inventory control and reactor
coolant heat removal --
DR. KRESS: Where would you put flooding the
cavity in that?
CHAIRMAN APOSTOLAKIS: Is that a safety function?
DR. KRESS: I consider it one. Where would you
put operation of the sprays?
CHAIRMAN APOSTOLAKIS: I would say if you deleted
the word, bypass, and you said containment integrity, then
all these things are included there.
DR. KRESS: But they didn't. They had containment
bypass integrity.
CHAIRMAN APOSTOLAKIS: Just because of the word,
bypass, we define a new class of safety function?
MR. BUDNITZ: No, no, these aren't those. These
are functional initiating event categories or categories
that affect these things.
DR. KRESS: Oh, you're talking about initiating
events.
CHAIRMAN APOSTOLAKIS: I think the word, key, is
redundant here. I mean, you really have to try very hard to
find something that doesn't belong there.
MR. BUDNITZ: Well, going once, going twice, it's
out. It also has the phrase, are the minimum set -- well,
these include, at a minimum, X, Y, and Z, so you're right,
there is the freedom to throw something else in there; do
you see it?
DR. KRESS: You're right.
CHAIRMAN APOSTOLAKIS: Anyway, we don't want to
make a big deal out of it.
MR. BUDNITZ: Are you taking notes about these
things, because I'm not.
MR. SIMARD: Yes, we're taking notes and we're
also going to get a copy of the transcript.
CHAIRMAN APOSTOLAKIS: There is a transcript.
MR. BUDNITZ: Enough said; it's done, okay?
CHAIRMAN APOSTOLAKIS: Anything else on
initiators?
[No response.]
CHAIRMAN APOSTOLAKIS: From the members?
[No response.]
CHAIRMAN APOSTOLAKIS: One quick question on page
26 under Transients in the bottom box: And this is now a
requirement that applies to all three categories.
MR. BUDNITZ: Which requirement are you in?
CHAIRMAN APOSTOLAKIS: This is Table 4.4-1(a),
page 26, 331, 331-B, transients, loss of offsite power and
manual shutdowns. Are these the only transients we're
looking at?
MR. SIMARD: Well, we do say that the following
list is not intended to be all-inclusive.
CHAIRMAN APOSTOLAKIS: I mean, automatic shutdowns
for some reason are not a transient?
DR. KRESS: They generally are categorized as
transients.
CHAIRMAN APOSTOLAKIS: Why do we distinguish? Why
the word, manual? Should it be just shutdowns?
MR. SIMARD: Absolutely.
CHAIRMAN APOSTOLAKIS: Bob?
MR. BUDNITZ: I can't remember why that's there.
CHAIRMAN APOSTOLAKIS: Well, maybe you guys can
think about it.
DR. SHACK: That's really one of the things that
really got stripped down compared to Version 10. There was
a much longer list and much more detailed thing in 10 than
there is in 12.
MR. BUDNITZ: Yes. It should say manual and
automatic, just to flesh it out.
CHAIRMAN APOSTOLAKIS: Yes.
MR. BUDNITZ: I don't argue that for a moment.
CHAIRMAN APOSTOLAKIS: Especially since these
things are now part of the performance indicators, right?
MR. BUDNITZ: That isn't relevant to us. We're
doing a PRA.
CHAIRMAN APOSTOLAKIS: To support risk-informed
oversight.
MR. BUDNITZ: In part.
CHAIRMAN APOSTOLAKIS: Why am I arguing with you,
Bob. Maybe now we can go back to Mr. Fleming. You plan to
talk about accident sequence analysis?
MR. FLEMING: Yes.
CHAIRMAN APOSTOLAKIS: Good. I don't know, but
how do we handle this expert judgment? Should we do it now
so that Bob can --
MR. FLEMING: It's your call.
CHAIRMAN APOSTOLAKIS: Okay, let's do expert
judgment. That is on page 155, as I recall. One of the
comments of the Committee was that it was too detailed and
focused on one approach.
MR. BUDNITZ: Well, it's fair to say that the
original version 18 months ago was ten times as long.
CHAIRMAN APOSTOLAKIS: Yes. The first comment,
Bob, is --
MR. BUDNITZ: With a whole lot of detail that's
just gone, including a long appendix that's gone.
CHAIRMAN APOSTOLAKIS: Yes. Unlike other chapters
-- we're coming back to the earlier comment about
consistency and so on.
You are giving two references here. Other
chapters, or most of them stay away from providing
references.
MR. BUDNITZ: But you notice that they're
permissive, may be used to meet the requirements in the --
CHAIRMAN APOSTOLAKIS: But you know, the moment
you say "may be" in a standard, I mean, that's --
MR. BUDNITZ: No, "may" is a crucial word that is
used in standards to indicate a permissive that is not
required.
CHAIRMAN APOSTOLAKIS: Right, but then --
MR. BUDNITZ: Other approaches may also be used.
CHAIRMAN APOSTOLAKIS: But then the question is
why these two? For example, the second one would seem to me
to apply to high level waste repositories. It's from the
NMSS Branch of the NRC, and it says in the title, in the
high level radioactive waste program.
Why does this belong in a reactor standard?
MR. BUDNITZ: Because it's a method.
CHAIRMAN APOSTOLAKIS: It's a method? I thought
they just reviewed existing methods?
MR. BUDNITZ: Well --
CHAIRMAN APOSTOLAKIS: The first reference is a
method, but the second one really was a review to state the
Branch position and I'm not sure that this helps anybody
here.
And then the big question you're going to get is,
why are you ignoring NUREG 1150? If you are going to put a
Branch technical position on high level radioactive waste
repositories, you are -- you are not citing the major study
that involved expert judgments sponsored by the Nuclear
Regulatory Commission.
MR. BUDNITZ: George, you might know the answer to
that.
CHAIRMAN APOSTOLAKIS: I know.
MR. BUDNITZ: That was written by Apostolakis and
Budnitz and a bunch of other people. If you don't know
that, George and I were the authors of 6372.
The answer, George, is that in 6372, after a lot
of thinking, we rejected some of the methodology used in
1150.
CHAIRMAN APOSTOLAKIS: But we never really looked at
what the staff did in NMSS, and I'm pretty sure if we
reviewed that, we would have some comments as well.
My point is that the moment you start putting
references, you get these questions. You know, why didn't
you include this guide? Why didn't you include that guide?
Why do you have this fellow?
I would say it would be probably best to not have
any references at all. Now, that severely limits the
ability of the user to really do something, but it would be
consistent with the rest --
Or, just take out the second reference, which I
think is irrelevant here, and put two or three more. I'm
sure you're going to get this comment about NUREG 1150.
I mean, they went through a major exercise there.
They spent a lot of the Agency's money, and now we are not
even citing them.
MR. BUDNITZ: On 1150, I'm prepared to write a
rebuttal if anybody says that, and I assume that you will
review and tell me I was right about it.
You remember what they didn't do that was right.
CHAIRMAN APOSTOLAKIS: But since you are allowing,
Bob, a graded approach to the use of expert judgment --
MR. BUDNITZ: Sure.
CHAIRMAN APOSTOLAKIS: Surely there is a role for
1150 somewhere there? I mean, in Category II issue, for
example, I mean, we're even allowing the technical
integrator to do it internally to the company, you know,
without even going to outside experts.
In that sense, there must be a role for 1150
somewhere. I mean, I am on your side when you say that in
the full treatment, the first reference we have here goes
beyond 1150. And now it just occurred to me, can we comment
on things that we have co-authored?
MR. BUDNITZ: Well, I am on the phone with you so
we can do what we want. You just stated on the record that
you and I co-authored that, so everybody understands.
CHAIRMAN APOSTOLAKIS: There is Mr. Markley here
who has some views.
MR. MARKLEY: Well, George, to the extent that you
can, you should avoid discussing your own work.
CHAIRMAN APOSTOLAKIS: What I am doing here, Mike,
is I am staying away from the technical content.
MR. MARKLEY: You can state the facts.
CHAIRMAN APOSTOLAKIS: I am just stating that
there are other references that I think belong.
DR. KRESS: Right.
MR. MARKLEY: There is nothing wrong with that.
DR. KRESS: You can provide clarifications, and to
the extent that you are not supporting the reference, you
can actually add to the discussion.
CHAIRMAN APOSTOLAKIS: Sure. And I think it is
evident from our exchange with Bob that we are not really
getting into the details. My point is that since you
decided to cite the reference, and I think that is
appropriate here, because it is not easy to find these
things, it seems to me you have to cite a few more for
completeness. And especially since your write-up, the text,
does allow for different approaches that involve different
levels of sophistication, if you will. That is all the
comment I have to make here.
DR. KRESS: But when you start adding more
references you always have the completeness problem.
CHAIRMAN APOSTOLAKIS: Well, yeah, but I mean
there are two or three major, like 1150, I mean, for
heaven's sake, it introduced the formal use of expert
judgment to the nuclear safety business. There were lots of
little papers here and there, some of them mine, but 1150
really pulled the whole thing together.
DR. KRESS: But there is a whole science out there
on expert elicitation.
CHAIRMAN APOSTOLAKIS: Sure.
DR. KRESS: With books and texts, and where do you
stop?
CHAIRMAN APOSTOLAKIS: But what I am saying is we
should limit ourselves to things that have been used in the
nuclear business, especially the ones -- I don't know that
the industry has supported any major studies in this area,
but the NRC certainly has. There was another one later on
Level III.
DR. KRESS: Level III.
CHAIRMAN APOSTOLAKIS: Which was in collaboration
with the European Union.
MR. BERNSEN: George, we will certainly consider the
comment.
One of the other observations I would make is that
we have talked to Bob about the fact that there was a lot of
valuable material in the earlier drafts that shouldn't be
lost. And I believe he is committed to write a paper which
might be suitable for reference in this issue or some
subsequent issue of our standard. We felt that a technical
paper would be more useful for that type of information
perhaps than a standard.
CHAIRMAN APOSTOLAKIS: That's fine. But, again,
looking at the standard alone, since there are no references
in other places.
MR. BERNSEN: Understood. And we will certainly
consider that.
CHAIRMAN APOSTOLAKIS: Jack?
MR. SIEBER: I was just thinking that that is a
good idea to write a supplementary paper, because I think
that then becomes the tutorial for the application of the
standard, and without it, I think there is something
missing.
MR. BUDNITZ: Well, on the other end, I said I
would do that, but it hasn't been done yet, and it certainly
isn't going to get into this edition of the standard, you
know, because, obviously, you know how long these things
take.
George, by the way, so here is George -- George is
talking about this, and without arguing about
conflicts, George, offline, let's have a conversation about
what other references might be appropriate here, and I will
give it some thought.
CHAIRMAN APOSTOLAKIS: Sure.
MR. BUDNITZ: And, by the way, just to broaden
this, I can call up and have discussions with two or three
other members of our team, there were seven authors there,
like Peter Morris and so on, and see if I can pull together
an improved little list.
CHAIRMAN APOSTOLAKIS: Yeah, I am not talking
about, you know, 35 references. I am talking about --
MR. BUDNITZ: Certainly. You are talking about
two or three more.
CHAIRMAN APOSTOLAKIS: Two or three key, major
references, you know, that included nuclear related issues.
One or two more comments. I think, Bob, you
undertook a very difficult task here trying to give guidance
as to when to use, you know, the facilitator approach or the
technical integrator, and there is a series of four bullets
on page 135. I would suggest, I mean I understood what you
meant here, but, you know, I have spent three years with you
working on this, earlier on the standard says that examples
will be used to clarify things. In fact, in Section 4.4 on
requirements, there is a series of examples. I would
suggest that on this Section 4.6, you give a few examples of
what you mean by certain things.
For example, are there any Level I issues that
would require a TFI treatment, or would the technical
integrator treatment be good enough? The one that
comes to mind from 1150 is the coolant pump seal LOCA, where
there is model uncertainty. Would that be a good example?
And maybe that one can be handled by the utility itself,
since you allow them to do that, by a technical integrator.
Then, as you move on to Level II, I suspect for some of
these issues, one would have to do a more rigorous expert
judgment elicitation process, you know, so people will get a
better idea.
I am afraid that this is not clear now, unless you
really have read some of the citations. And especially when
you say, on page 135, 4.6.3, "The PRA analysis team may
elect to resolve a technical issue using their own expert
judgment or the judgment of others within their
organization." Now, if I were a utility person, I would say
this is great, we can resolve all the issues internally.
Maybe we will call up one or two consultants to make sure we
are not doing anything really bad, and then I would not read
the rest.
Why should I worry about whether uncertainties are large
and significant and whether judgments of outside technical
experts are useful? I mean since you allow me to resolve technical
issues within my organization, I would probably do that.
MR. BUDNITZ: Yes, but read the next sentence. I
will read the sentence you read, "The PRA analysis team may
elect to resolve a technical issue using their own expert
judgment or --" Right. But the next sentence, "The PRA
analysis team shall use outside experts when the needed
expertise on the commission is not available inside."
CHAIRMAN APOSTOLAKIS: Well, I understand that,
and that again --
MR. BUDNITZ: And then there is a "should" which
is sort of in between. It says maybe you have the experts,
there is a "should" which is in between. You got it, but
there are other reasons why you want to go outside.
CHAIRMAN APOSTOLAKIS: But, again, I mean then it
comes down to deciding whether I have the expertise or not,
which, of course, --
MR. BUDNITZ: Yes, but that is always discussion
that is left up to the analyst team. Nobody but the analyst
team could ever make that call. I think that is intrinsic
to this game.
CHAIRMAN APOSTOLAKIS: But all I am saying is --
MR. BUDNITZ: Do you agree with that?
CHAIRMAN APOSTOLAKIS: A few -- no, I agree. But
a few examples, I mean not 10, but two or three.
MR. BUDNITZ: I can cite some examples.
CHAIRMAN APOSTOLAKIS: Of issues.
MR. BUDNITZ: The way I can cite it is I can cite
two or three reports which cover issues.
CHAIRMAN APOSTOLAKIS: Yeah.
MR. BUDNITZ: In other words, the analyst who is
trying to figure out what the hell is what, could go to that
coolant pump seal example, or they could go to, for example,
full elicitation at Yucca Mountain for seismic hazard or
something, to see the whole big, gory thing.
CHAIRMAN APOSTOLAKIS: Well, this is actually a
good example, or 1150. I mean in Level II analysis, there
are all sorts of issues that require expert judgment, right.
MR. BUDNITZ: Right.
CHAIRMAN APOSTOLAKIS: Although I don't know if
you would limit yourself to LERF, or whether there is the
same number of issues. But, certainly, if you do the
traditional Level II, with the release of --
DR. KRESS: There are a lot less issues if you do
it to LERF.
CHAIRMAN APOSTOLAKIS: A lot less. But the
question that would come to my mind would be, if I were a
utility executive, why can't I go to NUREG-1150? They did
all this analysis, maybe I can take their results, use my
expertise in my facility, maybe hire a consultant, and maybe
adopt those results to my plant. So I don't have to go
through this expert elicitation process and all that. I
mean do you allow that reality here, which I suspect a lot
of people would find very attractive?
MR. BUDNITZ: Well, of course.
CHAIRMAN APOSTOLAKIS: Because the idea of going
through a NUREG-1150, it is just out of the question for a
private company to do. I mean it is okay for a federal
agency that wants to gain insight and so on.
MR. BUDNITZ: By the way, of course, it is not
only allowed, it is explicitly -- it is expected, I suppose.
But they do have to get by their peer reviewers.
CHAIRMAN APOSTOLAKIS: That's right. And all I am
saying is by giving two or three specific examples, like I
just did, I think you will make this section much easier for
people to understand and implement.
Also, you don't emphasize enough this community
perspective, which, for a private company, may not be
relevant. Remember when we were doing this other thing, a
very important concern for a federal agency that is looking
at broader issues is what is the community of experts' view
or a spectrum of views on a particular issue? Because this
is a federal agency, they have to regulate 103 units. But
if I am one utility with one or two plants, I probably don't
care about the community of experts, do I? I mean I really
worry about what applies to my facility. Although, of
course, there I can have the community's views. So, I would
suggest that this become clearer.
MR. BUDNITZ: Well, it is right there, the last
paragraph of 4.6.4, probably this whole thing is only, you
know, half, two-thirds of a page. Read it.
"The utility shall be responsible for aggregating
the judgments so as to develop the composite distribution of
the informed technical community."
CHAIRMAN APOSTOLAKIS: Yeah, but what I am saying
is that this is not sufficient to bring up the issue of the
community view. Maybe you can emphasize it a little more.
I mean every word here, every sentence is loaded with
meaning.
MR. BUDNITZ: You and I know that this was 100
pages turned into three-quarters of a page.
CHAIRMAN APOSTOLAKIS: I know. I know. So, maybe
by using a few examples, you can make it a little clearer
and that is all I have. And maybe thinking again about the
issue of references, either eliminate all of them or add two
or three more.
MR. BUDNITZ: I see your point.
CHAIRMAN APOSTOLAKIS: Yeah. Are there any -- I
guess the issue of expert opinion elicitation does not arise
when you do a Category I. I mean it really has to be
Category III, right?
MR. BUDNITZ: Well, I mean if you are just having
a couple of experts in, which is not only allowed, it is
probably the most common thing.
CHAIRMAN APOSTOLAKIS: A Category II perhaps. But
Category I, which is --
MR. BUDNITZ: I mean, you know, you have a couple
of experts in, you still have got to follow this, you just
do it in a certain way. Right?
CHAIRMAN APOSTOLAKIS: That's right. I think it
is so short that it probably will not be of great use to
people, but I don't expect individual utilities to really
resort to expert judgment elicitation to a large degree
anyway. I mean this is more like a federal kind of
activity.
Any other members have any comments on this
particular issue?
[No response.]
CHAIRMAN APOSTOLAKIS: Well, I guess we are done
with Bob. Bob, do you have any comments?
MR. BUDNITZ: Yes. I am not sure, and my
colleagues are sitting around the table there, how much more
here -- well, you know, we cut this way down on purpose
because there didn't seem any middle ground between
something that was real short and the whole big banana,
which didn't make sense, it was out of context. That thing
that was in the first thing was out of context, it was as
long as the rest of the standard practically.
CHAIRMAN APOSTOLAKIS: Yeah.
MR. BUDNITZ: I suppose, you know, 25 percent more
doesn't place it too much out of -- you know, doesn't screw
up the balance.
CHAIRMAN APOSTOLAKIS: Yeah.
MR. BUDNITZ: And I will see what I can do. Maybe
it is only just a sentence here and there.
CHAIRMAN APOSTOLAKIS: Okay. Have you thought
about eliminating the whole section, or is that out of the
question?
MR. BUDNITZ: Of course we did. I didn't think
eliminating it made sense because without some guidance, you
leave it wide open.
CHAIRMAN APOSTOLAKIS: Okay.
MR. BUDNITZ: I mean you do want to say things
like look at the last one. You do want to tell them who is
responsible. You do want to tell them that they have got to
identify the issue and that they shall go outside. You do
want to tell them they shall go outside when they don't have
the needed expertise, I think, don't you?
CHAIRMAN APOSTOLAKIS: Yeah. Mario.
DR. BONACA: Just I want to make sure before Bob
goes, I want to pick up again on something we talked about
before.
CHAIRMAN APOSTOLAKIS: Okay. Are we done with
expert judgment? I think we are done.
MR. BERNSEN: Just let me make an observation with
regard to this. I mean our primary purpose here is we are
responding to comments received.
MR. BUDNITZ: Yes.
MR. BERNSEN: And in that context, I don't recall
any comment that said, take it out. I do think we had
comments that said it seemed to be over-weighted in terms of
the total approach in the standard. And yet, obviously, you
want to recognize that this is a part of the process and
must be considered. The user is obligated to address, and
the peer review team has the opportunity to evaluate. So, I
don't think we should take it out.
And, obviously, your comments on the examples and
detail and references are quite appropriate. But we did not
get any suggestion that said delete it.
CHAIRMAN APOSTOLAKIS: And you are not getting one
now either. I am not suggesting that.
MR. BERNSEN: Right.
CHAIRMAN APOSTOLAKIS: I just asked a question
whether you have considered it.
I think this kind of exercise is really foreign to
most utility practitioners. I don't think they will go
through this thing, unless somebody is about to shut down
their facility and there is a major seismic issue, and then
it is a different story. But in their routine application
of PRA, I doubt it very much. But I agree with you, there
should be some guidance.
So let's go on to the other issue that Dr. Bonaca
has.
DR. BONACA: Yeah, I just had -- I wanted to pick
up on the issue, Bob, of what you described before, that is,
from Rev. 10 to Rev. 12, you really took out how to do it.
MR. BUDNITZ: Well, that is not fair. And the
others in the room can elaborate. Sprinkled throughout all
of this text are how to do it, you know, at different
levels. Because sometimes you couldn't describe what to do
without telling them how. You know, there has to be a
flavor of that or it can be sterile in some places.
But where there were five ways to do something for
sure, it was erroneous to tell them how to do one of those,
you see.
DR. BONACA: Yeah. And I don't disagree with the
approach you have taken.
MR. BUDNITZ: Do people around the table agree
with me, my team members there?
DR. BONACA: Well, let me finish my question.
MR. BUDNITZ: Yeah.
DR. BONACA: At least I don't disagree that you
should do that. I am only observing that in some cases the
ways of doing one thing are not all equal, I mean some of them leave behind
some problems. And, you know, I have heard time and time
again from PRA practitioners, you know, a discussion about
the method, because if you follow a certain method, then you
have to do something else later on to back up some possible
shortcoming in the approach. And I think the original
version we reviewed, Rev. 10 had some of those elements
inside it.
Now, I am not saying Rev. 12 doesn't have it,
because I haven't performed that kind of evaluation. I only
see that, you know, I am just concerned that -- it seems to
me, okay that a lot of the burden now has been placed on the
back of the peer review process, that is supposed to make
sure that all these possible, you know, pitfalls in the kind
of approach you use are being dealt with. Am I correct or
not? I don't know. That is a question.
MR. BUDNITZ: Does anybody else want to try to
answer that, too? I don't know where the burden is. The
burden is always on the analyst to do it right, and for the
peer reviewers to check, isn't it?
DR. BONACA: Well, I mean --
MR. BUDNITZ: It is just like running a reactor,
the burden is on the reactor operator to run the reactor.
The NRC can't.
DR. BONACA: Let me explain to you why I have got
a problem, okay.
MR. BUDNITZ: No, I understand.
DR. BONACA: No, no, no. No, let me just finish.
I have got a problem because you keep saying that this
standard applies to -- as a standard is going to be used by
the power plants and there is a full process here that, in
fact, describes, you know, the utility use of this. And, in
fact, the peer review process is also very focused on
utility use. Most of these utility people did not perform
the PRAs. PRAs were performed, in a lot of cases, by
specialists who did not participate in the peer review
process, who are not parties to this. And so those kinds of
capabilities are not being applied in this review process.
That is why I am raising it. That is the only reason.
I would have no problem if all this is going to be
administered by, you know, top, experienced industry
practitioners. This is going to be applied by individual
utilities, typically, with one or two PRA people, that is
the whole staff they got, plus a lot of other people in
their expert panel, and that is why I am raising these
questions.
And there may be an excellent answer. I just said
it seems to me that there has been a significant shift, and
maybe it is not significant, but some shift of
responsibility to the peer review process.
CHAIRMAN APOSTOLAKIS: Well, I think the issue you
are raising, Mario, is really much bigger, and it comes back
to the earlier discussion we had on whether this is a
procedures guide or a high level guidance document. It
seems to me it is inevitable that this will happen, what you
just described, because they have to stay away from actually
prescribing methods. In fact, even the previous version,
Rev. 10, that we saw was criticized as being overly
prescriptive.
And I have a comment, for example, on the common
cause failure part here that lists five methods, and without
any qualification, and I think one of them should not really
be there at all. So, you know, unless you are an expert in
that field, unless you are Karl Fleming, you will not really
know which method to use.
DR. BONACA: And I want to point out, --
CHAIRMAN APOSTOLAKIS: I would rather eliminate
all five.
DR. BONACA: -- George, in fact, I didn't disagree
that this could be done. And I am not unsupportive of
this. I am only making a statement that, in my judgment, I
see some shifting of the role to the review process, or the
peer process, which, in fact, you have. And maybe we will
talk about the peer process, review process later.
CHAIRMAN APOSTOLAKIS: We will. But I think you
see that in many, many applications like Option 2 and so on.
And it is inevitable. When you really don't trust the
numbers or the models to guide you to a decision, you have
to rely on the judgment of people. And this is what is
happening here, too. It may be in a different context, but
it is the same thing.
DR. BONACA: Yes.
MR. BUDNITZ: Mario, let me talk about how this
standard differs from most standards that ASME puts out.
And I will tell you that I just had this exact same comment.
I am chairing the group that is writing, under the American
Nuclear Society, the seismic external hazards piece. And we
had a meeting of our oversight committee just last week and
it came up, the exact same question.
One of our committee members objected strongly to
what we had written because it wasn't prescriptive enough.
And I will just read you an example, but I am going to read
it from initiating events. If you have initiating events,
the first high level requirement says, I will read it to
you, "The initiating event analysis shall provide a
reasonably complete and appropriately grouped treatment of
initiating event categories." Okay.
Now, let's focus on the word "reasonably." Shall
provide a reasonably complete treatment of the stuff.
Right. Now, and the same thing came up in the seismic PRA,
but it is generic throughout here. The person said, what do
you mean by reasonably complete? I mean, you know, when you
are telling people how to design, how to do a calculation of
the stress on a piece of metal, you don't talk about
reasonably complete, you give them a method.
And we had this discussion, and, of course,
everybody understands in PRA that if we ever said the word
"all" or "complete," it is death. There is no "all" and
there is no "complete" in PRA, because you have to screen
things out that are unimportant compared to other things.
So, you have to use the word "reasonably." And then you
come -- what is reasonably complete? Well, the analyst has
to make that call, and the peer reviewers have to agree with
it. You are stuck with that, there is no way around that,
Mario and the rest of you, in my view. And I hope everybody
in the room will nod at what I am saying.
It is that sort of judgment that in the end --
that is why this standard, this whole area is different than
most ASME standards or ANS standards that one writes in
other areas.
DR. BONACA: And, Bob, I totally agree with you, I
understand it. I am only --
MR. BUDNITZ: So it is a dilemma that we are in from
the start.
DR. BONACA: Yeah. But this raises certain questions
in my mind about the peer review.
MR. BUDNITZ: Of course.
DR. BONACA: I know that when I was still -- and I
knew we had the peer reviews of a number of PRAs, and I am
going back in my memory to see what the qualification of the
people were. Just because -- and probably they were
adequate. I am only saying that I didn't even think about
the peer review when I was reviewing Rev. 10, and maybe that
was my problem, but now I am thinking about it more
thoroughly.
CHAIRMAN APOSTOLAKIS: Okay. Any other comments?
I'm sorry.
MR. BERNSEN: Just one. I did look at the changes
from 10 to 12 fairly thoroughly in the initiating event
area. And substantially, most of 10 is there. A lot of the
narrative stuff may have been deleted, but the "what"
requirements by and large remain. In the initiating event
area, there wasn't that much change.
MR. BUDNITZ: Wait, wait, wait. There was no
requirement in Rev. 10 that isn't here.
MR. BERNSEN: What I mean is, and Bob did the same
thing independently, so --
MR. BUDNITZ: There is absolutely no requirement
that is missing.
MR. BERNSEN: Right. Okay. I guess I stand
corrected. That is true. And so, I don't think there is a
major change, in fact, from -- going from 10 to 12 for this
particular element.
DR. BONACA: No, I am only mentioning that in some
cases there were examples. Remember, you can use this
approach or you can use that approach. And if you use this,
you should be doing also this and this and that. There was
that part.
MR. BERNSEN: That was kind of -- it was in 10 as
well.
DR. SHACK: It sort of comes back to George's
question. You know, if you look at (a)(4) in 12 versus the
list in 3.3.1.1 of 10, there really is a great deal of
difference in the list. You know, you get the transients
and the LOCAs in (a)(4) and you have a much more descriptive
sort of thing in 3.3.1.1. And the question is why just
these, you know? You know, should you have dropped them
all, or, you know, included them all. And it is, you know,
it is another one of those things. Where do you stop? And
it is just curious, I guess is the answer.
MR. BUDNITZ: Well, that is a hard call. If you
are looking at 1(e)(a)(4), which is 3.3.1.
DR. SHACK: Yeah.
MR. BUDNITZ: Well, it says at the bottom this is
not intended to be all inclusive. So, you know, it was a
shot, okay.
DR. SHACK: Well, the other list wasn't all
inclusive either.
MR. BUDNITZ: Right. Right.
DR. SHACK: You know, there is no all inclusive
list.
MR. BUDNITZ: Right. That is what I said three
minutes ago.
DR. SHACK: Right. And, you know, the question
is, where do you stop when you give the example list? The
one in (a)(4) seems awfully abbreviated, I guess is sort of
my just general gut reaction. And why give such an
abbreviated list, you know, versus what was in the 8, the
10?
MR. BERNSEN: It is distributed to other elements.
In other words, if you go back to 1(e)(b)(3), so on and so
on, it is distributed. It is still there, but it has been
redistributed.
MR. FLEMING: I think it is a valid comment, but
the discussion that resulted in the solution that we adopted
in Rev. 12 was the following, that as you go from a general
list or a larger list to the appropriate list for a
particular plant, one ends up with the unavoidable
conclusion that the appropriate set of initiating events is
plant-specific. So, we worked on the requirements, the
supporting requirements, put a lot of emphasis on needing to
resolve the dependencies, the plant-specific details and so
forth, to come up with the appropriate list before you are
done.
The more we worked with more detailed lists, the
more we got arguments about, well, this is more of a PWR
list and not a BWR list, and it doesn't belong in my plant,
and so forth, so I think we all agree that one needs a
detailed list of initiating events to support the PRA but
putting a -- we did not want to promote the concept of using
a standard list of initiating events.
That is why we did it the way we did it.
MR. BUDNITZ: Yes, Karl. We could, for example,
have taken the, what? -- 12 or so things on this list, we
could have made this 62, but then you might convey the
flavor to the uninitiated that if you do all those 62 you are
done, and that is exactly the wrong flavor.
CHAIRMAN APOSTOLAKIS: I think we have exhausted
the subject and it is a broader comment and --
DR. BONACA: Yes, we have.
CHAIRMAN APOSTOLAKIS: Anything else?
[No response.]
CHAIRMAN APOSTOLAKIS: Okay, Bob. Thank you very
much.
MR. BUDNITZ: Okay. I'll ring off.
CHAIRMAN APOSTOLAKIS: Okay, bye bye.
MR. BUDNITZ: Thank you.
CHAIRMAN APOSTOLAKIS: And maybe we can also take
a short break until maybe just before -- until 11:18.
[Recess.]
CHAIRMAN APOSTOLAKIS: Back in session. I don't
have a quorum.
MR. MARKLEY: Your quorum -- 13 --
CHAIRMAN APOSTOLAKIS: All right. Okay, back to
Mr. Fleming.
MR. FLEMING: What I wanted to do at this point if
it is convenient for the committee is to go through some of
the details of the accident sequence element to bring out
some additional aspects of the structuring of Rev. 12 of the
standard and make some key points that we haven't really
stressed up to now.
In addition to the three column, three category
approach, which is of course where the detailed supporting
requirements are listed, the other feature of this draft of
the standard was the derivation of high level requirements
for each of the elements.
These high level requirements I believe are very
important, a very important enhancement to the standard from
the following perspectives that I will get into.
The process we used to develop them was to start
with the basic attributes of a PSA -- completeness,
dependencies, fidelity -- those types of issues and look at
those in the context of the particular objectives for each
element, which are also listed for each of the elements in
the standard and come up with an irreducible set, minimum
set of requirements for a PRA, for any category of
application that would have to be met.
This for example, the slide I have up here right
now, is on page 38, which is the high level requirements for
accident sequence analysis.
MR. EISENBERG: Page 39.
MR. FLEMING: Okay, it may be on page 39 in your
copy.
Table 4.4.2 -- this is the style in which we
presented all of the requirements for each of the nine
elements in the standard.
What we have done is developed from a high level
perspective really the fundamental requirements that have to
be met.
On these requirements we worked very hard, and I
think the degree of consensus that was reached at this
level -- among the project team and the other industry
groups that peer reviewed and provided input to this
process -- is much stronger than was actually the reality
at the functional requirement level.
My own personal opinion is that if a peer review
team consisted of appropriately experienced and competent
practitioners in PRA, they could take these high level
requirements and go in and perform a very good, sound
peer review process to determine the quality of the PRA.
We wrote these down for several reasons, one of
which is to show the context and logic for all the
supporting requirements.
To the extent that people have optional approaches
to do common cause, data, whatever, and to the extent that
someone may not have exactly followed some of the supporting
requirement tables that we have, the judgment, the yardstick
on which you should judge the adequacy of an alternative
approach would be with respect to these high level
requirements.
I wanted to mention these, because this is a very
key addition to what we had in Draft 10. While many of
these concepts were in Draft 10, they were kind of buried in
the textual presentation and we brought them out as very,
very explicit requirements.
The detailed requirements are then organized by
each of these high level requirements.
I might just pick a page of these for accident
sequence definition. As you go into the specific supporting
requirements for accident sequence, and this happens to be
several pages in, for Requirement B, which shows you the
style in which we have presented all these supporting
requirements, we have carried down into these detailed
requirements -- at the top of the table a reminder of what
functional requirement we are supporting.
This particular page happens to be Functional
Requirement B on plant-specific CDF and LERF quantification;
the accident sequence analysis shall provide a sequence
definition structure that is capable of supporting a plant-
specific quantification of CDF and LERF by the Level I-II
interface.
That is the functional requirement that all of
these particular detailed requirements refer to.
The second thing that we remind people of is that
as we apply the high level requirement and the supporting
requirements we are applying them against the attributes
that are the column headings in these tables, where we
repeat the particular attributes for this element, which
provides the scope of applicability of each of the detailed
requirements.
As you read these particular -- these are the same
ones I showed in a previous table -- a common theme here is
that Category 1 is focused on the dominant sequences and
contributors to core damage frequency and large early
release, whereas Categories 2 and 3 have to be extended to
the risk significant accident sequences and contributors.
In recognition of the fact that some Category 3
applications may have to go beyond risk significant accident
sequence contributors, we dropped that caveat -- you know,
it is sort of an implicit way of saying something more than
the risk significant accident sequences.
Then as you go down into the specific
requirements -- now we are into the very, very details of
the supporting guidelines, one thing that you will see here
is that for some elements like initiating events and
accident sequence definition, for example, these elements
are so fundamental to the overall structure of a PRA that
you won't see a great deal of delineation of specific
requirements coming across the columns.
The example that I threw up here, in most cases
what you will see in this differentiation is a permission of
the application of conservative models in Category 1 and a
lack of tolerance of conservative models in Categories 2 and
3.
Then based on the judgment of the project team,
there is in some cases a softening of the language, the
action statements that we are using that are specifying the
detailed supporting requirements.
The choice of the verbs was considered very
carefully but using the judgment of the project team as to
whether something ought to be included in the model or
whether it should just be considered in the model, and that
is another example of differentiation.
When you get into other elements like data
analysis and quantification and HRA you will see quite a bit
more differentiation of detailed requirements across the
three columns, and the classic example is what we were
talking about earlier is that we go from point estimates,
mean values, to full uncertainty quantification, as one
particular type of example.
But then finally we end up for all of the
elements, and if I am right -- I'll get the particular
pages -- the particular requirements for documentation.
In general -- and this just happens to be the
table on page 51, which may be 52 in your package -- you
will find the documentation requirements to be common
across all three categories.
The practical reason for this is that in order for
a peer review team to even determine what category of
application the PRA can support, it is necessary to have
that documentation so we can measure what is in there, so
you tend to see particular aspects of that.
The other thing that we tried to avoid, and we
debated this at length in the preparation of this draft of
the standard, was having, let me say, "buzzwords" like
"shall" or "should" or "may" trigger some kind of an
automatic documentation requirement. So what we tried to do
is, for each element, to have very specific documentation
requirements that in the judgment of the project team are
necessary and sufficient for a peer review team to come in,
read that documentation, determine the category of
application that that element is capable of supporting, and
whether it meets the intent of the requirements.
You will see a very, very specific long list here.
In fact, I think if you look at these documentation
requirements you will see that by and large they include the
documentation requirements that were in Rev. 10 as well as
additional documentation requirements we had in the
standard.
So that is -- you know, the purpose of this part
of the presentation was to just walk you through a little
bit more of the structure to point out the role of the high
level requirements and the way in which we'd carried down
the high level requirements and the attributes for each of
the three categories, so that each time one is reading a
specific requirement in the tables, they can provide the
context, they can grasp the context of that requirement and
what it was intended to achieve in interpreting how far to
implement it.
Those represent some of the structural changes and
enhancements that we tried to put into this draft of the
standard, again exclusively motivated by the comments that
we received on Draft 10 to make the standard easier to use
for a range of applications.
That pretty much concludes what I planned to
formally present as far as the standard, so if you have any
other comments --
CHAIRMAN APOSTOLAKIS: It appears to me that
Category 1 is really separate from the other two and the
distinction between Categories 2 and 3 is really a very fine
one.
If you look at the various requirements you are
imposing, usually you apply them to both Categories 2 and
3, with some exceptions.
Perhaps -- I think this is a very important issue.
Maybe the subcommittee can debate it a little bit. The
categories are discussed on pages 3 and 4 of the standard in
terms of examples.
In Category 2, you give examples of typical
applications: risk-informed prioritization of GL 96-05
periodic valve verification testing requirements, risk-
informed inservice testing, risk monitoring applications,
quality assurance, and tech spec modifications. It appears
that most of these, if not all, really rely on importance
measures to rank SSCs, don't they?
Then if you go to Category 3, you also have: PRA
products are used to prioritize and rank SSCs with respect
to safety significance. Well, if that is the case, then
when I do a GQA and have to prioritize and rank SSCs, which
category does that belong to?
On the one hand, you are telling us this is
Category 2, but then in Category 3 you say that when I rank
SSCs I have to do a Category 3 analysis.
This issue came up yesterday in the other meeting.
I guess there is an implicit assumption here that the
importance measures are fairly insensitive to a full
Category 3 treatment, that you can get a reasonable ranking
without going to the details of Category 3, which is an
untested assumption, and perhaps somebody ought to test
that.
Maybe for RAW that is valid because you make such
drastic assumptions, although the remainder is still, you
know, sensitive. I mean, you set a component down, but the
rest of the stuff is at its nominal values.
I don't know that what you have here on pages 3
and 4 is the best description of what the categories are and
whether -- I mean there is no statement here regarding the
degree of confidence that one has to have in the PRA results
and how that degree of confidence really dictates how
sophisticated your analysis should be, but isn't that really
what it comes down to?
MR. FLEMING: Yes, that is very important --
CHAIRMAN APOSTOLAKIS: And that is related of
course to approaching the forbidden region in the diagrams
of 1.174, increased management attention, that the Staff
uses.
I wonder whether these kinds of thoughts can be
reflected on these, that what really matters is the degree
of confidence that is required in the calculations to
support the application and then you go on to the examples
and so on.
DR. SHACK: The one that struck me as funny here
was the A4 Category 1, where --
CHAIRMAN APOSTOLAKIS: Yes.
DR. SHACK: -- in A4 you are looking in
combinations that may be rather unusual and different, and
yet it is often in the category where you think you are
almost looking at the generic PRA and that just struck me as
a kind of an unusual way for that.
I would have characterized the A4 applications as
somewhat different, where you are really looking for some
unexpected, surprising interactions.
CHAIRMAN APOSTOLAKIS: Which comes back to Mario's
surprise. Karl?
MR. FLEMING: Yes. I think the particular --
sorry, I lost my fuzzy there -- back to fuzzy sets.
I think that is a good comment. I think the reason
why something like A4 was placed in Category 1 in this
document is based on the fact that A4 is a rule that was
imposed on utilities and of course the utilities were
expected to use their existing PSAs, whatever their existing
PSAs, they had to implement that rule, but there was really
no requirement at least from a regulatory perspective that
they employ the kind of elevation of PRA quality which would
be expected for, say, a Reg Guide 1.174 application, so I
think that is sort of the motivation for putting it down
there.
It was a rule that they had to fulfill anyway and
there was not a formal requirement by the NRC that you have
to do a quality -- you have to upgrade to a certain level of
quality of PRA before you can implement the rule, so I think
from a technical standpoint your point is well-taken.
MR. SIMARD: We have had a number of
comments that these are not good examples and we need to
revisit them.
One example, though, that seemed to resonate at the
workshop yesterday -- Karl, maybe you could summarize the
discussion about, given that you have a Generic Letter
89-10 MOV testing program, the distinction between
Categories 1 and 2?
MR. FLEMING: Right. Yes, we had a very good
discussion on this and I think it did provide us with some
good insights on how we can improve the discussion on these
pages in response to George's earlier comment.
For example, Generic Letter 89-10, we would take
the position that use of the PRA to apply risk ranking
quantifications to your MOV list just for the purpose of
deciding which ones you are going to test first in
fulfilling the 89-10 requirements without impacting the
scope of the valves in 89-10 would be a Category 1
application, but if you wanted to say that, hey, we only
wanted -- we are going to exclude certain requirements from
scope and now we are actually bending or modifying or
proposing a relaxation of the rule on a plant-specific
basis, now we are talking more of a risk-informed
application a la 89-10 in which we would actually have to
quantify the risk impact of the part of the rule that we
weren't planning to fulfill.
While the risk ranking might have been a useful
prioritization, we would have had to elevate this to a risk-
significant determination to show that the delta risk
associated with modifying how we are going to apply the rule
is justified, so that would be how we would make those
particular distinctions.
DR. BONACA: I would like at some point to go back
to the Category 1 --
CHAIRMAN APOSTOLAKIS: Well, this is the point.
This is the time.
Go ahead.
DR. BONACA: Well, it seems to me that one
fundamental requirement I believe is that the PRA must be
commensurate with the change that it supports. That is a
fundamental element of the whole standard here.
It seems to me that there is a prejudging by
saying that Category 1 specifically addresses SSC risk
significant determination for the maintenance rule and A4.
There is a prejudgment here.
Wouldn't it be true that the utility still has a
responsibility to assess its own PRA, determining in fact,
for example, that the component is described and all that
kind of stuff, and to look at the dependencies, and if, in
fact, dependencies are not treated, then maybe it has to
upgrade the whole system?
Now the concept is within the standard. I
understand that, okay? -- but by reading this under Category
1 it seems there is almost a prejudgment that anything that
meets Category 1 it will be adequate for doing this support
of the maintenance rule.
Do you feel that it is supportable this way,
particularly for the issue of dependencies, which were
shallow at times, as we discussed before, because in the
application of the maintenance rule that is exactly what is
happening out there.
There are very at times innocuous pieces of
equipment which are removed from service. They appear to be
innocuous. They may be in the air system. They may be, you
know, systems which are not safety grade typically.
MR. FLEMING: Right. There is one aspect of your
comment, Dr. Bonaca, that was not intended by the project
team, and that is the treatment of dependencies.
I believe that, and we will have to go back and
check this, but I believe that in the phrasing of the
requirements for treatment of dependencies which would show
up in a number of places including initiating event,
sequence definitions, systems analysis, probably
quantification and LERF, but as you go through and look at
the detailed requirements I don't believe that we let the
PRA staff off the hook, so to speak, in treatment of
dependencies with the exception that we may permit a
conservative treatment of dependencies for some of the
Category 1 applications.
I don't think that we really intend to avoid the
need to find all the system interaction and functional
dependencies and common cause dependencies and so forth even
for a Category 1 application.
I think in our upgrade of these pages for the
final draft that we would consider in light of these
comments, I don't think that we intend to make these
assignments of applications on an exclusive basis in the way
in which it is arranged in the standard right here and right
now.
There may indeed be applications of A4 that, in
the way in which they are implemented at a particular
plant, really call for a Category 2 or 3 PRA.
DR. BONACA: I guess what makes me uncomfortable
is singling out the maintenance rule and A4 as a specific
application in Category 1, when I do believe that the
configurations you may end up with by pulling equipment
out of service at power may be more challenging than
other, more formal changes that you are making, in part also
because for more formal changes you do have a more thorough
process. They have more time --
MR. FLEMING: Right.
DR. BONACA: -- and other things, so I think you
have to be -- I don't know, this just seems to single out
maintenance rule and A4 as a less challenging situation and
I don't think it is.
MR. FLEMING: There is one other important caveat
that perhaps we haven't been too clear on, and that is that
it makes A4 a particularly interesting example to discuss
because this standard and these requirements are really only
covering the annual average CDF, LERF part of the PRA.
All of these new issues that come into play for
the time-dependent risk monitoring applications that
differentiate above those different -- this standard does
not really go into that. It is really outside the scope of
our standard, and I think for that reason maybe A4 would not
be a very good example.
CHAIRMAN APOSTOLAKIS: But you have an example
under Category 2, risk monitoring application. That is not
part of the standard?
MR. FLEMING: I am saying that when you look at
the technical requirements for initiating events and
accident sequence and so forth, we really do not go into the
additional requirements that you would have to have in there
to do risk monitoring applications.
For example, time dependent initiating events --
the details of what is in the standard right now don't go
into the additional technical issues that come into play
when you do time dependent risk monitoring in applications.
MR. SIMARD: Well, I think it was a good idea to
put these examples in because it has brought out some really
good discussion here, but I think what I am hearing is that
we need to go back and reconsider whether we have any
examples at all, because of Dr. Bonaca's point about
prejudging the outcome.
The other thing I am hearing from the past day and
a half is that we need to go back and make even clearer our
expectations here that, first of all, experience with the
certification process shows that as you look at the various
subelements of a PRA you will find, for every PRA -- I think
every PRA that has been looked at -- some of the subelements
are grade 2, some are grade 3, some are even grade 4, so
we have found a spread among existing PRAs that would
roughly correspond to the three categories.
Second, we need to make clear that what we are
talking about now does not describe the PRA, it describes
the attributes of an application. And for a specific
application, our intent is to go through the PRA subelement-
by-subelement, and for a particular subelement determine
what level of capability that application calls for.
So, I think we need to do a better job of
reinforcing the point that this does not describe the PRA.
We are not giving an overall grade, for example, to the PRA.
DR. BONACA: I wanted to just point out, first of
all, I really want to be reasonable. But let me give you an
example of what my concern would be. My concern would be I
have a very simple PRA and I am going to take out of service
two safety systems. Okay. And my PRA recognizes them, pull
out one and two. Then I have some component in a support
system that is not safety related. Nobody recognizes the safety
role to it, so, therefore, we take it out of service because
it doesn't fall into the maintenance rule listing or
anything like that. And we know that there are dependencies
out there.
Okay. Now, typically, the operations people are
pretty smart. At times, they don't see it. We are all
human. And so that is the scenario under which, you know,
the maintenance rule still has a lot of experience to -- you
know, as far as online maintenance to be developed, and so
it is not such an easy application. So, anyway, it is just
a comment.
CHAIRMAN APOSTOLAKIS: I would like to come back
to what Mr. Simard just said. You said that you look at
what PRAs are out there and they roughly correspond to one
of the three categories. It seems to me that this should
not be a criterion for defining the categories, because you
should go one step beyond that and ask yourself, has the NRC
staff, or have these PRAs of the various categories been
actually used in some of these applications successfully?
Because it is true perhaps -- no, I am sure it is,
that there are Category I PRAs out there, but have they
actually been used in risk significance determination for
the maintenance rule? And has the NRC staff said this is
good enough? That should be the criterion, because the fact
that the PRAs exist out there independently of their use in
the decision-making process really doesn't tell us very
much. So, I wonder whether that is the case, whether
anybody came here with an IPE that was what we call now
Category I, they submitted an application and the staff
said, this is good enough.
MR. FLEMING: In the effort that went into the
original development of the industry peer review
certification process that was originally sponsored by the
Boiling Water Reactors Owners Group, now all the Owners
Groups have picked up on a variation of this process, the
information that existed at that time with respect to
different plants' successes and failures with risk-informed
applications was taken into account in the definition of
these. These categories were originally defined in the
certification process.
And as a matter of fact, the great success that
the South Texas project had provided many of the examples
for the checklists that were developed to differentiate, at
least with respect to some elements, the category -- what
are now the Category III applications, for example. So the
information that did exist with respect to the track
record of PRAs used in the decision-making process went into
the original definition of these categories.
CHAIRMAN APOSTOLAKIS: The question is whether the
staff now accepted these applications. And South Texas
perhaps is not the best example because they have a very
good PRA. Level III, right?
MR. FLEMING: Right.
CHAIRMAN APOSTOLAKIS: A Category III. So they
can support the arguments that they may want to make using a
very good PRA. But I think that should be really a good
test, because just because there are PRAs out there that are
of the three categories doesn't really mean very much unless
they have been tested in a real risk-informed decision-
making environment, which means the NRC staff has reviewed
them and said, yeah, for this application, this is good
enough. And I don't know of any case where the staff did
not actually go to Category III and raise questions.
Now, that may be temporary because we are all
learning, but --
MR. FLEMING: But, again, at the time these
categories were initially defined, there was quite a large
database of experiences at South Texas and other plants with
risk-informed applications that had a track record of
success that provided an information base on which to define
these categories.
CHAIRMAN APOSTOLAKIS: In other words, has anyone
used the Category I IPE to satisfy the (a)(4) requirements?
MR. FLEMING: Well, that category was defined to
capture those activities which the industry was, in
general, using, but which were not subjected to an
additional special peer review process of the PRA to
verify the application.
CHAIRMAN APOSTOLAKIS: The reason why I think this
is very important is because it is my understanding that
there is a Presidential directive that all federal agencies
should use national standards to the maximum degree
possible, correct?
MR. MARKLEY: OMB Circular A-119.
CHAIRMAN APOSTOLAKIS: There you are. So, one
possible misuse of this might be that, you know, if it
becomes a national standard, a licensee may come to the
staff and say, you know, you are not following the OMB
directive because you have to follow the standard, and the
standard says that for (a)(4), I can use a Category I
analysis. So, now we are really changing the process.
Instead of establishing the categories based on a mutual
interaction between the staff and the industry, now we are
trying to impose on the staff certain limitations as to what
they can ask and what they can expect.
So, I think this is a very critical point here
when it comes to the categories.
DR. BONACA: Because I think it is important to
note, also, regarding online maintenance, that utilities who
were doing online maintenance before, at times, were
performing evaluations with their IPEs or PRAs, many of
them. So there has been a backfitting, and they have been
using whatever they had. And so I don't think we want to,
you know, have the standard necessarily endorse this, or
force on the NRC an acceptance of a process.
CHAIRMAN APOSTOLAKIS: Yeah, I mean it should not
impose limitations on what the NRC staff may want to do.
MR. SIMARD: Can you help me understand your
concern? Because if we eliminate any pre-judgment, if we
eliminate the statement that this particular application
fits Category I and so forth, all we are doing is -- and,
again, we are not talking about Category I PRAs. This part
of the standard we are talking about describes the
application, not the PRA. So, all we are doing is
recognizing that given applications may require PRA
subelements of varying capability, that for a particular
application, you may need a fairly robust treatment, in one
area of your PRA, and the way you have treated other
elements of your PRA may not be as important.
So, all we are doing is setting in place a
framework without committing the NRC staff to any judgments
with respect to a given application. All we are doing, if
we eliminate the examples here, is saying, for applications
that have the following categories, here is an appropriate
level of PRA.
Now, it is our intent that it is up to the NRC
staff to make the judgment of particular applications.
DR. BONACA: If you are leaving out the examples,
then I would agree with you. I have no problem at all.
The only issue here, in my eyes, was that, de facto, you had
established that the capabilities necessary to support the
(a)(4) maintenance rule are less than the capabilities
needed to rank, you know, for risk prioritization, and less
than others.
MR. SIMARD: Yeah, no.
DR. BONACA: That is really a prevarication of the
process. The process --
CHAIRMAN APOSTOLAKIS: As long as the NRC staff
doesn't get its hands tied because of the OMB directive --
DR. KRESS: They are already standard.
CHAIRMAN APOSTOLAKIS: I'm sorry?
DR. KRESS: Go ahead.
CHAIRMAN APOSTOLAKIS: And they actually judge and
say, well, gee, that is what this says, but we really
believe you ought to do this and this and that to satisfy
(a)(4), then I don't have a problem. Yes?
MR. BERNSEN: George, I think this discussion was
very useful. We need to take it back and consider it,
because there is no way that this standard is going to put
out the concept that we are making decisions on where you
apply it in regulatory space. These were intended to be
examples of typical current usage. And apparently it is not
clear. And I think as the discussion proceeded, it isn't
clear. We need to reconsider that.
Now, what Karl presented yesterday in the workshop
describing the attributes of the different categories was
very useful. And, you know, it pointed out that, you know,
for the Category I, we are talking about cases where you are
using the PRA to support deterministic analyses and you are
not changing licensing bases and things of this sort, and so on.
We need to go back and focus on that, and take
another look at this. And if these are not good examples of
current usage, we need to take them out. And we make it
clear that this is -- we are not prescribing them. The
standard does not prescribe them, that is done by the
regulator. We are sensitive to that, so that is a good
discussion.
CHAIRMAN APOSTOLAKIS: I think Section 1.5, pages
3 and 4, should be revisited with that point of view.
MR. BERNSEN: Right.
CHAIRMAN APOSTOLAKIS: You know, this is a very
sensitive issue. Maybe in the paragraph, the second
paragraph that talks about the boundaries between the
categories and so on, bring up the issue of degree of
confidence and so on.
MR. BERNSEN: Right.
CHAIRMAN APOSTOLAKIS: But the introductory
paragraph perhaps should make it clear what these
typical applications are intended to -- the message they are
intended to convey, and that in no way are they binding on
anybody. I don't know if you can say.
MR. SIMARD: Exactly.
CHAIRMAN APOSTOLAKIS: Because I think this
particular section, you know, is very critical in how the
categories will be viewed later on, because you don't want,
again, to have people say, gee, it was Category II, and the
OMB says you have to follow it, and all of a sudden the NRC
staff is on the defensive about why they are violating an OMB
directive and not following a national standard, you
know.
DR. BONACA: The other interesting thing is that
one could contend that in order to evaluate two or three
different components simultaneously, you need quite an
advanced degree of sophistication, you know, and so you want
to look at that, too. Okay. I just added that.
CHAIRMAN APOSTOLAKIS: Okay. Any other comments
on the categories? Yes.
MR. FLEMING: I wanted to clarify one thing that
one of my colleagues on the project team wanted me to point
out, and there is a lot of confusion, I think, or
opportunities for confusion when one compares these three
categories in our standard to the categories that were
originally defined in the industry certification process.
These categories here correspond to the industry
Categories II, III and IV, whereas Category I in the
industry certification process was the IPE level. So, I
just wanted to make the clarification here that this
Category I is already raising the bar, I think to a
significant extent, above what was expected for the IPEs.
So, I just wanted to clarify that point. Category II is
further up the bar.
MR. BERNSEN: I don't know, we may revisit this
again, but it is appropriate to bring it up at this stage.
We felt that it was useful to have the three categories in
our standard because it does reflect current usage, and it
does recognize that there are different grades for the various
elements and supporting requirements. And we are looking
for feedback on that, what your reaction is to that, because
it is an important concept of the standard.
CHAIRMAN APOSTOLAKIS: On this particular subject,
any comments from the NRC staff?
[No response.]
CHAIRMAN APOSTOLAKIS: Public?
[No response.]
CHAIRMAN APOSTOLAKIS: I guess -- Karl, do you
have any more viewgraphs?
MR. FLEMING: No, that is all.
CHAIRMAN APOSTOLAKIS: I have comments on the HRA
and data analysis. I don't know, Jack, do you have anything
on the accident quantification?
DR. BONACA: On the quantification, yeah.
CHAIRMAN APOSTOLAKIS: You do.
DR. BONACA: Yeah.
CHAIRMAN APOSTOLAKIS: So, I guess we can break
for lunch now and then pick up the specifics after that. We
have comments on specific sections. I don't think it will
take more than an hour or so.
DR. BONACA: No, I don't have extensive comments.
CHAIRMAN APOSTOLAKIS: Gerry.
MR. EISENBERG: I just wondered, the question Dr.
Bernsen raised is a more generic question.
CHAIRMAN APOSTOLAKIS: Yes.
MR. EISENBERG: Are we going to revisit that after
lunch?
CHAIRMAN APOSTOLAKIS: I thought we were done, but
at the end there will be -- we will go around the table so
that the members --
MR. BERNSEN: I guess my question is, is the
discussion clear on this? Have we had a reaction from you,
or are we still waiting? With regard to --
CHAIRMAN APOSTOLAKIS: Our reaction to the
categories?
MR. BERNSEN: Your consideration of the
acceptability of retaining the three categories and the
utility of it.
CHAIRMAN APOSTOLAKIS: Well, why don't we think
about it and maybe come back to it. But at the end I plan
to go around the table and maybe this is a question to which
we will have to return.
MR. BERNSEN: Fine.
DR. BONACA: I think there are a lot of questions,
of course. And, you know, I want to say that still I can
see the strength of the high level requirement approach,
that is a real strength over the previous, because it is
structured, the top process, how you get into that.
CHAIRMAN APOSTOLAKIS: Well, the thing is the NRC
staff itself, in Reg. Guide 1.174, recognized that the
degree of sophistication of a PRA varies with the
application. That is why we have the shades of gray and
there is a discussion about sensitivities and model
uncertainties and so on, as you approach the boundaries.
So, people do recognize that not all PRAs have to
be, you know, the perfect PRAs for all applications.
What you are doing here is you are going one step
beyond that and you are actually trying to formalize that by
defining categories. And, you know, the implications and
consequences of this kind of thing is something that we are
all thinking about. And, again, my concern, as I expressed
earlier, is how are certain licensees going to use this in
light of the OMB directive. And if it is used to impose
certain constraints on the staff, then I think that would be
an unfortunate use of the standard.
So, let's come back to this at the end, around
2:00 or so, after we finish with the specific questions.
But this is certainly something that is extremely important.
Okay. So, we will come back at 1:00.
[Whereupon, at 12:05 p.m., the meeting was
recessed, to reconvene at 1:00 p.m., this same day.]
A F T E R N O O N  S E S S I O N
[1:06 P.M.]
CHAIRMAN APOSTOLAKIS: Back in session. So, we
will discuss first, some of the items under Risk Assessment
Technical Requirements, and some of the specific questions,
and then perhaps go back to a general discussion of the
standards, and let's plan on finishing maybe like 2:15 or
2:30.
Okay, we've discussed already initiating events,
accident sequence analysis, success criteria. Any comments?
[No response.]
CHAIRMAN APOSTOLAKIS: System analysis. Success
criteria, I think Bill Shack may have some comments. Do we
know where he is?
MR. MARKLEY: Have you seen Dr. Shack?
MR. SIEBER: He had an appointment over lunch.
CHAIRMAN APOSTOLAKIS: He did? Well, that brings
us to Human Reliability Analysis, which is me.
And that takes us to page 76. Yes, that's where
it starts, right, 76.
[Pause.]
On page 78, under C, for pre-initiator HRA, it
says that the evaluation of errors in pre-initiator human
action shall be performed using a well defined process that
recognizes plant-specific nature of the human failure
events.
Are you referring to the Swain-Guttmann various
adjustment factors there?
MR. MROWCA: Bruce Mrowca, I'm a Project Team
Member and also from Baltimore Gas and Electric. I didn't
get to the section you were talking about.
CHAIRMAN APOSTOLAKIS: Page 78.
MR. MROWCA: 78.
CHAIRMAN APOSTOLAKIS: There's a table there on
high level requirements for human reliability analysis.
Under C, Quantification --
MR. MROWCA: Okay.
CHAIRMAN APOSTOLAKIS: Plant-specific nature, the
plant-specific nature of the human failure events, I was
wondering what that meant.
MR. MROWCA: This is for pre-initiating actions?
CHAIRMAN APOSTOLAKIS: Yes.
MR. MROWCA: My page numbers are different than
yours. That's what I'm struggling with.
MR. BERNSEN: Our page 76.
MR. WALL: Dr. Apostolakis, if I might offer a
small suggestion, if you use the index number on the
righthand column, it avoids the confusion about the page.
CHAIRMAN APOSTOLAKIS: Righthand column. How
about if I give you a table number, 4.4-5.
MR. WALL: The index number gives you the right --
MR. BERNSEN: But he's talking about the high
level requirements, Table 4.4-5, page 78 of mine. Is that
the same page for you?
MR. MROWCA: It's 76 for us.
CHAIRMAN APOSTOLAKIS: You dropped two pages, but
you're not going to tell us which ones. Quantification.
MR. MROWCA: That's the quantification of pre-
initiators?
CHAIRMAN APOSTOLAKIS: Yes.
MR. MROWCA: The intent was to reflect the plant
unique features of test calibration and maintenance, and
have a process that will identify those tests, maintenance,
and calibration activities that, one, need to be identified;
and, then whether they're proceduralized, and some method to
address the degree of proceduralization and the degree of
independent checking that's going on in the development of
those actions.
It actually is not meant to endorse, again, a
particular methodology.
CHAIRMAN APOSTOLAKIS: The only methodology really
that is out there and people are using is the NRC Human
Reliability Handbook when it comes to pre-initiators. I
don't know of any other.
So, in this particular case, it doesn't really
matter. It's the post-initiator that is subject to --
MR. MROWCA: I've seen many ways to employ a
single methodology.
CHAIRMAN APOSTOLAKIS: Yes, because it has a lot
of discussion on performance factors, so I was wondering
whether you were referring to that.
MR. MROWCA: Well, the key thing I think we're
trying to say is that the two attributes that you need to
consider were proceduralization and independence, and having
a technique to reflect those attributes.
CHAIRMAN APOSTOLAKIS: Okay. On page 80, which is
78 for you, I suppose, supporting requirements, Table
4.4-5(b), under both Category II and III applications, you
have a parenthesis that says, i.e., latent.
Now, first of all, we said earlier this morning --
and I don't know if you were here -- that the definition
earlier was not quite accurate.
But this is not something that people do
routinely, I don't think. Wouldn't it be worthwhile to
explain what you mean by latent conditions and latent errors
somewhere?
I was pleasantly surprised to see it here, but I
don't know that most people will understand what you mean by
latent, unless they have worked in the field.
MR. MROWCA: Are you making a distinction between
post-initiators and latent errors?
CHAIRMAN APOSTOLAKIS: No, no; this applies to
both.
MR. MROWCA: Excuse me, pre-initiators and latent
errors, is what I meant to say. They are the same thing,
right, the way you read it.
CHAIRMAN APOSTOLAKIS: Oh, I see.
MR. MROWCA: Essentially, I think they're being
used interchangeably in the standard, for pre-initiators
being any action prior to the initiating event.
CHAIRMAN APOSTOLAKIS: Right, and you refer to
miscalibration, for example.
MR. MROWCA: Miscalibration, maintenance,
alignments.
CHAIRMAN APOSTOLAKIS: What I am saying is that in
his book, Jim Reason and others defined latent conditions in
a broader sense, so my mind went to that. I didn't go to a
specific human action that disables something before the
initiator.
For example, latent conditions may include things
like prioritizing something or giving it a low priority, so
even though the Agency is aware of certain actions that must
be taken, they will take them sometime in the future, and
then something happens before the corrective action is
taken.
And a recent review by Idaho for the Staff here
identified many of those. They did a number of root cause
analyses and confirmed that.
So, the word, latent, means now something very
specific in the HRA community. And I think you should make
that clear that you are not referring only to the specific
action of miscalibration.
MR. SIMARD: Would you suggest a change then to
our definition of latent human error?
CHAIRMAN APOSTOLAKIS: There is a definition which
is very specific.
MR. SIMARD: Right.
CHAIRMAN APOSTOLAKIS: I suggested earlier this
morning to broaden it.
MR. SIMARD: Thank you.
CHAIRMAN APOSTOLAKIS: But I'm not sure that just
listing a definition will do it. Maybe some elaboration
somewhere here as to what latent conditions are would help.
I'm sure the Staff will be happy to give you the INEL study,
or at least the viewgraphs that were presented to us, and
that will give you an idea of where people are coming from.
MR. SIMARD: All right, thank you.
CHAIRMAN APOSTOLAKIS: One other comment on this
table: Oh, on page 82, again I was pleasantly surprised to
see under HR-C-4, assess the dependency of pre-initiator
human actions among multiple systems and trains, including
whether the work process itself introduces a mechanism for
dependency.
Well, I don't know of anyone besides me who
worries about work processes, so this was really very
pleasant to me. Is that something that you are doing and I
don't know about it?
MR. MROWCA: Well, I think maybe it's the
interpretation of what work processes --
CHAIRMAN APOSTOLAKIS: The way the term is used at
the plants?
MR. MROWCA: We assess whether there are different
crews that look at redundant channels, for example, and
whether there are coupling mechanisms that are possible
between those redundant channels. That's what we do at
Calvert Cliffs.
When we look at pre-initiators, we're trying to
actually identify mainly those things that do take out
redundant channels, because those were the ones of most
interest to us.
CHAIRMAN APOSTOLAKIS: So you are focusing on the
number of crews, perhaps?
MR. MROWCA: Well, not only the crews, but whether
the indications of the test will provide indication that it
was mis-done. For example, if there is adequate feedback to
the checker, that he will know that; whether the checker is
actually embedded into the proceduralized process of
performing the test or calibration or maintenance activity.
CHAIRMAN APOSTOLAKIS: In a maintenance work
process, for example, you will have a maintenance request.
There is some prioritization, again, because there are too
many of those, and there is a scheduling step.
There are all those things which are before the
actual execution. What you are saying, I think you are
focusing on the execution itself and how many people are
involved and whether there is feedback.
MR. MROWCA: Well, maybe I misunderstand your
point, but most of the latent failures that I have been
concerned with are the ones that have occurred as a result
of maintenance being performed, not as a result of waiting
for maintenance to be performed.
And those are typically captured in the
unavailability conditions that you have in the plant, and
the length of time that they're in that condition. And so
you would see that information in the unavailability data or
failure data, more than you would see it in developing human
error probabilities.
CHAIRMAN APOSTOLAKIS: I think both the INEL work
and work at MIT have seen the prioritization process --
MR. MROWCA: As important.
CHAIRMAN APOSTOLAKIS: As important, yes. But
since the words, work process, really means something again
to many people, maybe you need to define it and explain in
what context you're bringing it up here.
MR. MROWCA: Okay.
CHAIRMAN APOSTOLAKIS: And I would encourage you
to do that. Page 88, I have something on 88.
Oh, well, yes. The issue of model uncertainty is
really not raised anywhere in this guide, not just the HRA.
And it seems to me if one does a Category III PRA, and if
one goes back again to 1.174, you realize that the issue of
model uncertainty in some instances may be important.
And the Staff now explains very clearly in the
Guide, that those cases where we're near the boundary,
increased management attention means that we're going to
look at sensitivity studies and so on, because the Staff
also recognizes that there is no accepted method for dealing
with model uncertainty, although expert judgment elicitation
techniques come into that.
And in the post-initiator -- I think we are there,
pre- and post-, so it applies to both -- in the post-
initiator HRA, the issue of model uncertainty, of course, is
very important, simply because there are many different
groups around the world that have developed their own
models, and we have SLIM-MAUD; we have ATHEANA from here,
although ATHEANA hasn't quantified anything yet.
And we have ASEP from 1150 and so on. So it seems
to me that if there is one place, perhaps besides Level II
PRA, where model uncertainty is really important, it's the
post-initiator HRA.
And yet the standard is silent, and what's --
there is also inconsistency between what you're doing here
and the level of detail and what you do later for common
cause failures and expert judgment, but especially common
cause failures where you actually list five methods for
handling common cause failures.
And yet in this chapter, you are completely silent
as to what methods exist out there. So that's a broader
comment for you that, again, we have this inconsistency that
we discussed also in the context of Bob Budnitz's --
MR. MROWCA: In those cases, do you have a
recommendation that you would prefer to see listed?
CHAIRMAN APOSTOLAKIS: I would stay away from
recommending methods, and I will recommend later in the
context of common cause failures, that --
MR. MROWCA: To remove them?
CHAIRMAN APOSTOLAKIS: -- they delete the five
models. But that's my personal view, and I'm not going to
defend it in depth.
MR. MROWCA: Okay.
CHAIRMAN APOSTOLAKIS: The reason is that, as you
know very well, these methods -- I mean, there is no single
method that is acceptable by the group of people, but we
have to recognize, it seems to me, the fact that there are
different models out there, and perhaps you are aware of
some benchmark exercises that were run by the ISPRA
laboratory of the European Union a number of years back.
And Mr. Fleming, in fact, participated in at least
two of those, I believe. One was a common cause failure.
MR. FLEMING: That's right.
CHAIRMAN APOSTOLAKIS: And they have nice tables.
I mean, the table on HRA is just mind-boggling. The same
team using different models get orders of magnitude
different results, and different teams, of course, using the
same model also get that.
So, it's all over. And that's not your problem,
but you have to recognize here, it seems to me, that there
is such significant model uncertainty, and that if the issue
of recovery actions, for example, is important, important
enough to attract increased management attention, some
handling, some sort of sensitivity analysis here, you know,
using perhaps two or three of these models or arguing that
the model we're going to use under these assumptions is
really the bounding one, we need some guidance on this, it
seems to me.
And I'm not saying it's an easy task, and I'm not
saying it was obvious and we missed it, but it strikes me
that's something that is unique to this particular subject,
and I think the fact that even this Agency in this era of
limited resources is spending considerable resources in
developing ATHEANA, shows you that this is an area where
there is really a lot of activity.
So I would recommend that you do that, but I would
also recommend to the higher-ups that they put something on
model uncertainty somewhere.
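The kind of sensitivity study being suggested here could be sketched as follows. This is a purely illustrative Python example: the frequencies, the cut set, and the three model labels are all invented stand-ins for the outputs of real HRA models, not anything taken from the standard or from an actual PRA.

```python
# Hypothetical sketch of a model-uncertainty sensitivity study: re-quantify one
# accident-sequence frequency using HEP estimates from several candidate HRA
# models. All numbers are illustrative.

# Illustrative human error probabilities for one post-initiator recovery action,
# as three hypothetical models might estimate them (order-of-magnitude spreads
# of this kind were seen in the ISPRA benchmark exercises mentioned above).
hep_by_model = {
    "model_A": 1.0e-3,
    "model_B": 8.0e-3,
    "model_C": 3.0e-2,
}

INITIATOR_FREQ = 1.0e-2    # per year, illustrative initiating-event frequency
HARDWARE_FAILURE = 5.0e-3  # illustrative failure probability of the mitigating system

def sequence_frequency(hep: float) -> float:
    """Frequency of: initiator AND hardware failure AND failed recovery."""
    return INITIATOR_FREQ * HARDWARE_FAILURE * hep

results = {name: sequence_frequency(hep) for name, hep in hep_by_model.items()}
spread = max(results.values()) / min(results.values())

for name, freq in results.items():
    print(f"{name}: {freq:.2e} /yr")
print(f"spread across models: {spread:.0f}x")
```

Even this toy case shows the bottom-line result moving with the model choice, which is the argument for either running several models or justifying a bounding one.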
Yes?
MR. FLEMING: I don't think this will end up being
an impressive list, but there are a limited number of
examples of modeling uncertainty that's addressed in the
standard to some extent, and a couple that come to mind are
the seal LOCA.
CHAIRMAN APOSTOLAKIS: Exactly.
MR. FLEMING: The reactor coolant pump seal LOCA,
which is a modeling uncertainty issue, and the other one is
the modeling of the electric power recovery process.
CHAIRMAN APOSTOLAKIS: Right. This morning, I
don't know if you remember, but I suggested to Bob that he
list a few examples in the expert judgment section of Level
I issues that would require experts.
Now, again, why do they require experts? Because
of model uncertainty. So the whole thing ties in very
nicely, but I think somebody who can influence the whole
thing, can make -- must make sure that these things are
coordinated.
You wouldn't go to expert judgment in Level II
unless you had model uncertainty, right? If it's a
parameter issue, it's not a big deal.
Well, we can move on. So, my personal view is not
to list any models. You know how people -- at least some
people are. They might say, well, one of these is
acceptable and we'll do that.
If, on the other hand, in the context of model
uncertainty, you say, well, there is a number of models out
there, e.g., such and such, and then immediately you say one
would need to do something with those, then it's okay.
MR. FLEMING: Okay.
CHAIRMAN APOSTOLAKIS: Yes?
MR. FLEMING: Sid just reminded me of something
else to point out for the record, and that is when we get
down to quantification, there are specific requirements to
include modeling uncertainty.
CHAIRMAN APOSTOLAKIS: Wonderful.
MR. FLEMING: That's in a general sense, not a
specific sense.
CHAIRMAN APOSTOLAKIS: Yes, but this is an area
where guidance is needed, because, you know, I don't think
we disagree.
I have just a minor comment on 87, Table 4.4-5(f),
the third row, HR-F3, Category 3, include best estimate
time-dependent HEPs for initiation control, and the previous
one says best estimate, best estimate.
It is a crusade of mine to eliminate that
terminology from reactor safety. I realize it will be an
uphill battle when it comes to thermal hydraulics, fighting
an entrenched establishment.
But I have yet to find a book on probability and
statistics that tells me what a best estimate is, and I
think that is a pretty powerful argument. And I would
suggest that you, especially in Category III, avoid the
words, best estimate, and, in fact, there may be uncertainty
in that time.
I remember seeing a paper way back in the early
80s where people said, the author said that -- well, and I
think they are right -- that what really matters is not the
available time. It's what the operators perceive as the
available time.
They're not going to do thermal hydraulic
calculations there in 20 minutes, so that's what they
perceive as the available time.
Then, of course, the authors went out and asked
operators what the available time would be under a given
condition, expert judgment elicitation, and the available
time was overestimated systematically by a factor of three
or more.
This confirms another finding from the
psychologists that people tend to be optimistic when it
comes to their profession and their ability of handling
those situations.
So my suggestion is to eliminate, at least under
Category III, the words, best estimate, and maybe state
something to the effect that the time itself may be
uncertain.
MR. MROWCA: Okay.
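The point about uncertain available time can be illustrated with a small sketch. The numbers, the lognormal spread, and the factor-of-three perception bias applied below are hypothetical, chosen only to show what changes when a single best-estimate time is replaced with an uncertain one.

```python
# Illustrative sketch (not from the standard): treat the available time for an
# operator action as uncertain, and account for the tendency noted above for
# operators to overestimate it by a factor of three or more.
import random
random.seed(0)

BEST_ESTIMATE_AVAILABLE_MIN = 30.0  # analyst's nominal available time, minutes
PERCEIVED_BIAS = 3.0                # perceived time assumed ~3x the true time
REQUIRED_TIME_MIN = 12.0            # illustrative time needed to complete the action

def sampled_nonresponse_probability(n_samples: int = 100_000) -> float:
    """Fraction of samples in which the (uncertain) true available time is
    shorter than the time required, i.e. the action cannot be completed."""
    failures = 0
    for _ in range(n_samples):
        # Lognormal uncertainty on the true available time: what is perceived
        # as 30 minutes may really be around 10 once the bias is removed.
        true_available = (BEST_ESTIMATE_AVAILABLE_MIN / PERCEIVED_BIAS) \
            * random.lognormvariate(0.0, 0.5)
        if true_available < REQUIRED_TIME_MIN:
            failures += 1
    return failures / n_samples

# With a debiased point estimate the answer is all-or-nothing; with uncertainty
# on the time itself, the nonresponse probability takes an intermediate value.
p_point = 1.0 if BEST_ESTIMATE_AVAILABLE_MIN / PERCEIVED_BIAS < REQUIRED_TIME_MIN else 0.0
print(f"point estimate (no time uncertainty): {p_point}")
print(f"with uncertain available time: {sampled_nonresponse_probability():.3f}")
```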
CHAIRMAN APOSTOLAKIS: And then on page 90, this
is data. We're moving on to data. But overall, I thought
it was a pretty good high level description of the
requirements.
MR. MROWCA: There was a strong attempt to try to
stay away from methodologies and just stick to
characteristics that were important.
CHAIRMAN APOSTOLAKIS: Which is the theme of the
standard, right? Now, I don't know how some of these things
will affect the quantification, but that's a universal
problem.
I mean, I don't know whether we can really argue
that the quality of the written procedures is so high that I
should use rates lower than what another fellow is using,
but at least it makes people think about it.
Any other comments on HRA from the audience,
perhaps?
[No response.]
CHAIRMAN APOSTOLAKIS: No? Data analysis is the
next chapter, 4.4.6. On page 90 of mine, where it talks
about parameter estimation -- and this is repeated later. I
guess, Karl, you are the man here?
MR. SIMARD: That would be Ian Wall.
CHAIRMAN APOSTOLAKIS: Ian, I'm sorry. Okay, Ian
Wall.
Under Requirement C, Parameter Estimation, it says
uncertainty intervals should address key parameters. Why
intervals and not distribution? That's my page 90, Table
4.4-6.
MR. WALL: Which index number.
MR. BERNSEN: It's high level C.
CHAIRMAN APOSTOLAKIS: High level requirements for
data analysis, Table 4.4-6, or maybe the numbering of the
tables has changed.
MR. WALL: Let me say up front, Dr. Apostolakis,
that as penance for a lifetime of sins, I was assigned this
section, even though I lack expertise.
CHAIRMAN APOSTOLAKIS: Yours is not to blame,
it's just to come up with a better product.
MR. WALL: I was fortunate in having some
excellent help as facilitator, including Karl Fleming and
Stanley Levinson, and other consultants, Shobba Roa, who I
think you may know from PLG. So if I cannot answer your
questions, I will be happy to take notes of them and get an
answer later on.
CHAIRMAN APOSTOLAKIS: So, the question is really
why -- first of all, are we on the same page?
MR. WALL: We're on the same page.
CHAIRMAN APOSTOLAKIS: Uncertainty intervals shall
be addressed for key parameters. My question is, why not
distributions, because what are you going to do with the
intervals?
MR. FLEMING: I think I have an insight here.
This is a high level requirement which is phrased so that it
applies to all three categories. If you go down to the
detailed categories, you will see that we're insisting on
uncertainty quantification -- full uncertainty
quantification for Category III, but less -- you know, more
of a qualitative examination of uncertainties in Categories
I and II.
So the phraseology up at this level --
CHAIRMAN APOSTOLAKIS: For Category I, Karl, you
really don't ask for any uncertainty, so why would they come
up with an uncertainty?
MR. FLEMING: There are still requirements to
understand uncertainty from a qualitative perspective, even
in Category I, so the choice of words up at this level was
selected to be broad enough. It says considered.
It says uncertainty intervals shall be considered.
Down in Category III, you will see uncertainty distributions
have to be quantified.
CHAIRMAN APOSTOLAKIS: Why didn't you note that?
You confused me, and if you guys want to take action --
MR. FLEMING: I think the word, intervals, was the
problem.
CHAIRMAN APOSTOLAKIS: The word, intervals,
bothers me.
Then on page 94, alpha factor, multiple Greek
letter, basic parameter, binomial. Why? This is the only
place where you do this.
Table 4.4-6(e), third row. Use one of the
following models for estimating CCF parameters, and it gives
a reference, which I bet is NUREG. And I don't know why all
of a sudden you decided to be so specific here and list
models.
I mean, I would rather do what the HRA folks did,
and say use a model from out there, but be careful and do
all these things that we're telling you.
As you know, I don't know that the binomial
failure rate model is one of the following models, at the
same level as a multiple Greek letter model. I would rather
go with multiple Greek letter model than the binomial
failure rate model.
And also, you know, now it begs the question, the
alpha factor model was developed to correct certain
statistical things in the multiple Greek letter model, and
yet we say you can go back and use the multiple Greek letter
model. I would take them out.
It would be consistent with the other chapters,
and you wouldn't get questions like this, the ones I'm
giving you now. Now you're endorsing models.
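For reference, the alpha factor parameterization under discussion can be sketched as follows; this follows the non-staggered-testing form as I understand it from NUREG/CR-5485, with invented parameter values for a three-train group.

```python
# Hedged illustration of the alpha factor model mentioned above. The alpha
# values and Qt are invented; the formula is the non-staggered-testing form
# as given (to my understanding) in NUREG/CR-5485.
from math import comb

m = 3                    # redundancy level, e.g. three trains
Qt = 1.0e-3              # total failure probability of one component
alpha = {1: 0.95, 2: 0.035, 3: 0.015}   # illustrative alpha factors, sum to 1

# alpha_t is the expected number of components failed per component failure event
alpha_t = sum(k * a for k, a in alpha.items())

def q_ccf(k: int) -> float:
    """Probability of a common cause basic event failing exactly k of the
    m components (non-staggered-testing alpha factor form)."""
    return (k / comb(m - 1, k - 1)) * (alpha[k] / alpha_t) * Qt

for k in range(1, m + 1):
    print(f"Q_{k}: {q_ccf(k):.2e}")
```

The multiple Greek letter model parameterizes the same basic events differently (beta, gamma factors), which is part of why the ISPRA benchmarks found the data screening mattered more than the choice among these parameterizations.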
MR. FLEMING: If I might respond to that, I think
that, yes, I think it is true that there are some
inconsistencies across the elements in the position they
took with respect to methods.
There were a few examples, like accident sequence
definition where we throw out some concepts like event
sequence diagrams, dependency matrices, and so forth, to
provide specific examples of a systematic method for a
certain task in the PRA.
But in each case, we made judgments. There is an
important distinction we need to make between when we
compare HRA to Common Cause, and that is that, in fact, one
of the conclusions of the ISPRA benchmark exercises on these
different elements was that the selection of the method, the
modeling method itself, was not fundamental to the nature of
the result.
It was more in how the model was applied and how
the data was screened and so forth. So the variability
across methodologies among all the ones that are mentioned
here is not nearly as strong as it is in the HRA field.
But we have a tradeoff here. We can list methods
that we know are acceptable, and say justify alternatives,
or we can take this out and then replace it with the
recreation of these NUREGs that describe all the acceptable
characteristics that you have to have.
CHAIRMAN APOSTOLAKIS: If you took out the
binomial failure rate model, it wouldn't bother me.
MR. BERNSEN: George, that's a carryover from
Draft 10. It's a direct statement, quote, out of Draft 10,
but it's still a valid comment on your part. I'm not --
CHAIRMAN APOSTOLAKIS: Yes, I think you would have
to think about the issue of consistency from chapter to
chapter, but coming to the specific thing here, I mean,
sure, Karl has a point, people have spent a lot of effort
and resources on developing these. But the binomial failure
rate model, I'm not sure belongs in the same category as the
others.
As a matter of fact, we were told that in a
different context. But the San Onofre risk monitor uses a
multiple Greek letter model. So you really don't want to
say just use the alpha factor.
I agree with you, Karl, that some statistical
corrections really never made a big difference, which brings
me to the other point which I may have missed. But as you
know, a lot of the emphasis of the NRC work on this subject
over the last several years has been on building up a good
database, and then urging the user to screen all these
events.
I think, in fact, that goes back to you, Karl, in
the early days when you started this project.
To screen these past events as to their
applicability to the particular facility for which the PRA
is done. Is that emphasized anywhere here or have I missed
it? Is there any table here that says go to the data and
actually screen, and be careful when you screen? I mean, all
it usually gives is a list of common cause probabilities. But the
probabilities come after a long investigation.
MR. FLEMING: For example, if you look at DA-B8.
CHAIRMAN APOSTOLAKIS: B8?
MR. EISENBERG: DA-B8.
MR. FLEMING: DA-B8, which is at the top of one of
the pages of Table 4.4-6B. It is probably page 94 in your
IEEE. The very top requirement in that.
MR. SIMARD: It is the one right before the
requirement.
MR. FLEMING: It is suggesting that for Category
III applications you have to do this plant-specific
screening as indicated in the NUREG. So, yes, it is
mentioned for Category III applications.
CHAIRMAN APOSTOLAKIS: Yes, supported by plant-
specific screening and mapping. All right. If that is good
enough, that is good enough. And, again, we have a
particular reference here.
MR. WALL: I would like to note, Dr. Apostolakis,
that the supporting requirement DA-B8, to which Karl has
referred, is actually responding to an ACRS comment on the
previous draft, your comment in one of your attachments.
So, we did listen last time and we will listen again.
CHAIRMAN APOSTOLAKIS: So, our promises are coming
back to haunt us, is that what you are saying Dr. Wall?
MR. FLEMING: In fact, we refer to this as the
Apostolakis requirement.
CHAIRMAN APOSTOLAKIS: Okay. Well, so I don't
know, is your inclination right now -- I know that you
cannot really commit the committee, but to eliminate all
four or keep the first three and delete the binomial failure
rate model?
MR. FLEMING: I think those are good comments that
we will certainly take very seriously.
CHAIRMAN APOSTOLAKIS: I would like to see a
little more emphasis on the screening. Just saying plant-
specific screening, maybe that is good enough, I don't know.
Well, I think I am done with the common cause
failures, unless -- no, there is more. There is more. I'm
sorry. There is data analysis. Yeah, that is part of data
analysis. I have more comments. Page 96, Table 4.4-6C.
Again, the caption talks about intervals but we have
discussed this.
On the righthand side there are four bullets. Are
we on the same table? I like this one, verify that the
Bayesian updating does not produce a posterior distribution
with a single bin histogram. That way you don't -- in fact,
I like it so much I think you should elaborate a little bit
more, because the original data specialization paper did
not, unfortunately, emphasize this enough, and people take
test data, for example, and you don't need very many of
those. If you have done a few Bayesian calculations, you
know that very quickly, the posterior distribution becomes
very narrow, and if the idea was to keep a tail for the
accidents, for the accident conditions, and the data come
only from tests, then you have a problem.
MR. FLEMING: That's right.
CHAIRMAN APOSTOLAKIS: And I think that was the
intent here, but I think perhaps only a few of us understand
this if you just read it. So --
MR. FLEMING: True.
CHAIRMAN APOSTOLAKIS: And then the fourth bullet
says, verify the reasonableness of the posterior
distribution mean value. I would say, verify the
reasonableness of the posterior distribution. Why just the
mean? I mean since you have done the work, you might as
well look at it, which is related to the previous comment.
MR. FLEMING: Good comment.
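The single-bin concern raised above can be demonstrated with a minimal conjugate-update sketch. The prior and the exposure figures are illustrative, not from any plant; the point is only how quickly routine test data collapses the posterior, which is why the full posterior, and not just its mean, needs to be inspected.

```python
# Illustrative gamma-Poisson Bayesian update of a failure rate, showing how a
# modest amount of test data narrows the posterior toward a "single bin".
# All numbers are invented.
from math import sqrt

def gamma_posterior(prior_shape, prior_rate, n_failures, exposure_hours):
    """Conjugate update: gamma(a, b) prior on a failure rate, Poisson data."""
    a = prior_shape + n_failures
    b = prior_rate + exposure_hours
    mean = a / b
    cov = 1.0 / sqrt(a)   # coefficient of variation of a gamma(a, b)
    return mean, cov

# Diffuse generic prior: mean 1e-5 per hour with a large spread (heavy tail
# meant to cover off-normal, accident-like conditions).
prior_a, prior_b = 0.5, 5.0e4

for hours in (0.0, 1.0e5, 1.0e6, 1.0e7):
    mean, cov = gamma_posterior(prior_a, prior_b,
                                n_failures=round(1e-5 * hours),
                                exposure_hours=hours)
    print(f"exposure {hours:.0e} hr -> mean {mean:.2e}, coeff. of variation {cov:.2f}")
# As test exposure grows, the coefficient of variation shrinks toward zero: the
# posterior piles into a narrow band, losing the tail the generic prior carried.
```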
CHAIRMAN APOSTOLAKIS: And I think now I am done
with the data.
MR. WALL: Before you leave the data, Dr.
Apostolakis, I would like to just mention that you may react
to the fact how much smaller this section is than Rev. 10,
and I would like to provide some assurance to you that we
systematically went through Rev. 10 and took each of the
"shalls" from Rev. 10 and made sure it was handled in this
section as an action statement. And this section, I tried
also, in the square brackets on the lefthand side, to show
the paragraph in Rev. 10 from which that statement came.
CHAIRMAN APOSTOLAKIS: Good. Thank you. Well,
this is an area that is fairly mature now. The debate has
subsided. So, unless someone else has a comment on data?
MR. FLEMING: George.
CHAIRMAN APOSTOLAKIS: Yes.
MR. FLEMING: I just wanted to -- Ian brought up a
very good point, it is something that I just wanted to bring
to your attention, the committee's attention, is that one of
the characteristics of Rev. 10 that we tried to address in
this rev. was that if you go back to Rev. 10 and develop a
frequency distribution of the number of pages of
requirements against the elements, there was a strong
feeling by many of us that it was out of balance. That Rev.
10 tended to write the most about the things that we have
the least concern about in PRA, system analysis, data
analysis. There were lots and lots of requirements in here,
and fewer and fewer requirements in there that are really
tough.
So, we tried to balance, come up with a better
distribution of the level of detail of the requirements to
reflect the importance of the PRA element, and that was part
of what Ian was trying to accomplish here.
CHAIRMAN APOSTOLAKIS: One last thing, the user
will, if they go to the literature, which I am sure they will,
or the PRAs, will find terms such as state of knowledge
distribution, or, more recently, epistemic and aleatory.
Shouldn't these be in the glossary definitions?
MR. FLEMING: If we use them.
CHAIRMAN APOSTOLAKIS: And perhaps in the data
section, say a few words about the terminology. Because I
can see someone getting completely lost. Right now it says,
you know, develop a probability. In the definitions, I
didn't see it. But I think there is complete silence on
these things, and I think people will find it useful.
MR. FLEMING: In the quantification section, QU-D2,
the ones we were referring to earlier, we do, in fact, use
aleatory and epistemic.
CHAIRMAN APOSTOLAKIS: Yeah, but in the data
section.
MR. FLEMING: Therefore, we should have a
definition.
CHAIRMAN APOSTOLAKIS: So, there are two places,
one is the definitions and the other in the data section.
But when we talk about uncertainty intervals for parameters
like failure rates, maybe you put a parenthesis and say, you
know, this is epistemic.
Dr. Shack, we skipped the success criteria
section. Do you have any comments?
DR. SHACK: No.
CHAIRMAN APOSTOLAKIS: It's okay?
DR. SHACK: It's okay.
CHAIRMAN APOSTOLAKIS: Fine.
DR. KRESS: I had one question on success
criteria, I guess. It is sort of the same question I had on
the fission products. They call for use of remediation
things in fission product, but this requires you have a
model for fission product release and transport, but there
is no standards, or no requirements related to what that is.
I had sort of the same question on success
criteria. It calls for using realistic thermal-hydraulic
analysis or whatever to determine the success criteria. But
that is about as far as it went. And I was a little bit
concerned, well, I can do an awful lot in perturbing the
results of a PRA by screwing around with success criteria.
But it seemed like we didn't talk much about how one
determines those success criteria and what are the standards
of the deterministic calculations or the other kind of
calculations that are used for those. And it was just a
comment. It just seemed --
CHAIRMAN APOSTOLAKIS: That is actually related
also to the available time.
DR. KRESS: And available time, yeah. Yeah.
CHAIRMAN APOSTOLAKIS: It all comes together.
DR. KRESS: Same -- available time is the same
sort of issue there. So, I thought it needed a little more
discussion or something about that in there.
CHAIRMAN APOSTOLAKIS: In fact, different success
criteria do have an impact, right?
MR. FLEMING: Absolutely.
CHAIRMAN APOSTOLAKIS: It is not like failure
rates and common cause failure.
I think Dr. Kress makes an important point. I
think you can tie that to the available time for HRA
purposes and all that, and maybe say a few words about the
uncertainties in the so-called deterministic analysis.
Right?
MR. FLEMING: Right. Good comment.
CHAIRMAN APOSTOLAKIS: Okay. Mario.
DR. BONACA: In many cases it makes an enormous difference.
DR. KRESS: Yeah, you can really, you can make
enormous differences with it.
DR. BONACA: Take auxiliary feedwater: with a realistic, best-estimate analysis you may be able to show you have three redundant trains rather than two. So that is just an example; you can derive very big differences just from the fact that you can prove that you have a three-redundant system rather than two.
CHAIRMAN APOSTOLAKIS: Sure.
DR. BONACA: I had some comments on the
quantification here. One of them is I believe mostly
editorial, I brought it up yesterday, it is on page 109. I
am not sure it is only editorial. If you look at the high
level requirements, they are listed as B and C and they are
the same. But if you go into the supporting requirements,
page 113 and 115, they are different. So, I think probably,
C was meant to be something else.
MR. SIMARD: Oops, a mistake there.
CHAIRMAN APOSTOLAKIS: They are identical, yeah.
MR. SIMARD: The official answer to that is
"oops."
DR. BONACA: But the supporting requirements are
different in their meaning, so maybe you want a different
heading here under C.
MR. FLEMING: I believe that one of them was intended to be Completeness and Scope, and the other one was supposed to be Completeness and Detail, but we will confirm that. But it is obviously a failure of our document.
MR. SIMARD: Yeah. Obviously a failure of the
computer.
MR. FLEMING: Yes.
DR. BONACA: At the beginning I scratched it out, and then when I went back, I said, oh, I can't scratch out the supporting requirement there.
MR. FLEMING: That is where I hit "do what I mean"
button and nothing happened.
DR. KRESS: I wish I had one of those buttons.
DR. BONACA: On page 110, under the index, in a couple of the tables, under Category I applications, Category II applications, and Category III, the only difference is that one is "understanding," two is "a sound understanding," and three is "sound understanding and quantification," which is fine. But
when you get down to the first supporting requirement, and often after that, you use the word "estimate" -- "estimate CDF and LERF" -- rather than "calculate." Any reason here? I mean, "estimate" seems to me like such a vaguer word; there is a lot of latitude in estimating things.
MR. FLEMING: I don't believe we had any profound
reasoning in the use of that term and it is probably
superfluous. It probably could be --
DR. BONACA: Well, I just, you know -- since you are using it also for Category III, I thought it may have been something else at the beginning. Anyway, look at it, for whatever it's worth.
On page 111, at the top of the page, if you look at Category I again, it says you should have an understanding of the impact of key uncertainties. So I don't know about the next supporting requirement, QUA9, you know, the truncation requirement -- wouldn't that apply also to Category I? I mean, you still want to truncate at a sufficiently low value that the importance calculations are understood. Even if you don't have the same level of scrutiny or precision, you would want to have that.
And I had somewhat the same comment on QUA11 here. You are talking about something that I have seen analysts do always. I mean, at some point they estimate what they have lost in the truncation. It is a simple check; it is not a major undertaking. My sense is that it would be beneficial. Just using the word "consider" says to me that if you don't consider it, you may lose the understanding of the impact of key uncertainties.
So, my suggestion here is just that you review the
high level requirement against the supporting requirements
to see that there isn't a logical inconsistency. And that
goes down also to QUA12. I don't know, the screening value
seems to be pretty large.
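The simple check described here, estimating how much frequency has been discarded below the truncation cutoff, can be sketched as follows; the cutset frequencies and the truncation limit are hypothetical values chosen for illustration:

```python
# Hypothetical minimal cutset frequencies (per reactor-year).
cutset_freqs = [2.3e-6, 8.1e-7, 4.0e-8, 9.5e-10, 3.2e-11, 7.7e-12]

TRUNCATION_LIMIT = 1e-9  # cutsets below this are dropped from quantification

kept = [f for f in cutset_freqs if f >= TRUNCATION_LIMIT]
dropped = [f for f in cutset_freqs if f < TRUNCATION_LIMIT]

cdf_estimate = sum(kept)          # rare-event approximation: sum of cutsets
truncation_loss = sum(dropped)    # estimate of what truncation removed

# Sanity check: the loss should be a small fraction of the total.
fraction_lost = truncation_loss / (cdf_estimate + truncation_loss)
print(f"CDF ~ {cdf_estimate:.2e}/yr, loss fraction ~ {fraction_lost:.1%}")
```

If `fraction_lost` were not small, the truncation limit would need to be lowered before importance measures could be trusted.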
CHAIRMAN APOSTOLAKIS: I have a minor editorial
comment off of there.
DR. BONACA: So that is -- I'm sorry.
CHAIRMAN APOSTOLAKIS: "Understanding and quantification of the impact of uncertainties" -- I would say "quantification of the uncertainties," to avoid confusion. And that appears everywhere.
MR. FLEMING: Yes.
CHAIRMAN APOSTOLAKIS: Yes. I also got a comment from Mr. Barton; he is also wondering whether people will really understand the distinction between a realistic quantification and a straightforward quantification. He asks, what is meant by a realistic basis? And I think that is related to what Dr. Bonaca just raised. We actually make a distinction here between an understanding and a sound understanding.
These are very fine lines to walk --
DR. BONACA: That's why at the beginning I didn't
critique it.
I just went down into the supporting to see if
that, by that clarification --
CHAIRMAN APOSTOLAKIS: I really don't know what you guys can do about it, but I am just telling you the reaction of people who have not lived with this for the time that you have: when they see it, the distinction between one category and the other is that here you are modeling something but there you are doing a realistic model; here you understand something but there you have a sound understanding.
If you can do something to make it clearer, I
don't know what you can do but this is a reaction, okay?
MR. FLEMING: Just a comment on that. I think
that you see to some extent a work in process along the way
from a point in time when we were trying to figure out the
logic for, an appropriate logic for differentiating
requirements across the three categories.
At one time it was oversimplified to say that in Category 3 you "shall" do something, then "should," and then "may," so we converted the action statements --
CHAIRMAN APOSTOLAKIS: Right.
MR. FLEMING: -- and we don't mean unsound
understanding if you don't say sound, so I think this is
good feedback.
We need to go back and tighten that up.
CHAIRMAN APOSTOLAKIS: And that is why I will come
back to my comment yesterday and this morning that if you
tie these categories to the decisionmaking process and the
decisionmaking process is 1.174, and refer to the decision
criteria of 1.174, at least for me that makes it much
clearer as to what you mean by Category 3 and Category 2.
MR. FLEMING: Right.
CHAIRMAN APOSTOLAKIS: The Staff is on the record
saying that as you approach the boundaries things become
darker, increased management attention, but when you attract
increased management attention you better have a Category 3
analysis.
MR. FLEMING: That's right.
CHAIRMAN APOSTOLAKIS: That makes it clearer to
me.
MR. FLEMING: That's right.
CHAIRMAN APOSTOLAKIS: Otherwise, sound versus
unsound, realistic versus unrealistic -- it's a little
difficult.
It may be in the same context -- I mean I can see
the distinction between Category 2 and 3 in the context of
1.174. Category 1, I can't place it, so that is something I
am sure we will discuss again.
DR. KRESS: 50.59?
[Laughter.]
CHAIRMAN APOSTOLAKIS: 50.59 is always the answer.
DR. BONACA: I had one more comment here.
CHAIRMAN APOSTOLAKIS: One more comment, yes.
DR. BONACA: Which is general. That's all I've got -- it is more on the use of the word "may." Let me give you an example -- QUA-14 on page 111.
Under Category 3 you are making it a requirement
to use the same truncation limit for solving each system in
the overall sequence CDF, because it is the proper approach.
When you come down to the other two categories you
are using the word "may" -- now if I had seen the word
"should" I would have said that's fine. It's a
recommendation. People don't have to follow it, but "may"
seems to be a little bit -- almost too loose.
CHAIRMAN APOSTOLAKIS: Too weak.
DR. BONACA: Too weak. I mean "may" -- yeah, I
may also take a walk.
[Laughter.]
CHAIRMAN APOSTOLAKIS: But should you though?
[Laughter.]
DR. BONACA: Just a suggestion though -- it seems
a little bit, you know, especially for Category 2 --
CHAIRMAN APOSTOLAKIS: Too wimpy.
DR. BONACA: For Category 2, certainly, I would want to see a "should" there. And then if I go to Category 1, I would almost do without the word entirely. But "may" -- I couldn't understand what it meant.
I can understand the trouble you are going through
and I wouldn't want to be in your shoes, actually, going
from where you are going before you had all those "shall" --
but still I think it would help.
MR. FLEMING: Just to comment on that, it was our intention, although not completely successful, in preparing this draft to use the word "may" only as a permissive -- you had several options; you could do (a), (b), or (c). But there are some remnants, and this was one of them, of leaving a "may" that is somewhat less than a "should," which we don't intend to leave in there.
MR. BERNSEN: We also don't want to use
"should" -- we have had too much trouble with that so we
will come up with something else.
DR. BONACA: Okay.
MR. WALL: It may be appropriate to point out that in the process of going to action statements we translated the "shalls" to a plain declarative verb -- "use this" -- and the "shoulds" to "consider using" something, and we left the "mays" as "mays." That was the general way we presented this.
Now, in the process of other hands doing things, some of those "consider usings" may have turned into "mays."
MR. FLEMING: That's right.
MR. BERNSEN: We'll fix it.
CHAIRMAN APOSTOLAKIS: Level II we have covered.
Process check is one little paragraph. Does anybody have a comment on the little paragraph?
DR. BONACA: I just want to ask would this process
check be different from a review that you normally have for
a standard calculation? What I mean is that there are very
specific requirements in QA whereby after performing a
calculation you have an independent reviewer who reviews it
and he also takes the responsibility to make a statement
that says I sign it, I have reviewed it for approach and
content and I agree or disagree on the following issues.
Would you see it differently? It sounds a little bit loose here, doesn't it?
MR. BERNSEN: This is a bit of a special problem.
We have been discussing whether to incorporate things like
specific QA requirements and some other requirements like
that as by reference in the standard and decided not to do
that because it is really up to the user at this stage.
The general feeling of the project team was that even though one might not apply an Appendix B process to this -- although I am not sure where that would be done -- there was a need to do some level of independent review, but perhaps not as sophisticated as what a number of licensees have developed in their design control programs.
It is certainly permissible under Appendix B to do
any kind of independent review for design verification.
This is not intended to be something else other than that,
but it was trying to clarify that we wanted some level of
checking yet we didn't want to impose the formality, because
it is really up to the user in their program.
DR. BONACA: Once the user applies this tool for,
say, 1.174 application, doesn't it take a role, a regulatory
role, where there are certain requirements in quality
assurance, et cetera?
MR. BERNSEN: Probably, but that is another venue
right now.
DR. BONACA: I think we should note, however, that
it is not included.
MR. BERNSEN: Yes. We have done both things in
ASME standards. In some cases we have explicitly called for
some kind of QA and in other cases we have allowed for
different incorporation of QA requirements.
In some cases we have not been explicit.
We are finding our way. We are primarily
interested in the PRA here, and not configuration control,
not QA, not things of this nature.
Management systems -- we are not trying to address
those. We are talking about the techniques that need to be
applied to the PRA, so this is an attempt to recognize that something needs to be done, but your questions are still worth raising.
I think if it is not clear, we need to clarify it
some more.
DR. BONACA: When you go to Section 5, on PRA
configuration control and you say certain things that place
a burden on processes --
CHAIRMAN APOSTOLAKIS: Bob, do you have any
comments?
DR. UHRIG: This standard is different from any other that I have had occasion to deal with, in the sense that a boiler code basically specifies that you have got confidence that this is not going to blow up.
You use an ASME standard in the specification of
materials, in a system, you know what you are getting, and
if you don't get it you can sue.
[Laughter.]
DR. UHRIG: This is totally different and I have
had a little trouble grasping it. I think I have got a
pretty good feel.
I accept the categories you have got here and they
make sense and I have read through what you have tried to do
but I am bothered by George's comment about people using the
same system and getting totally different results.
Am I misquoting you?
CHAIRMAN APOSTOLAKIS: In some instances, no, you
are not. In some instances, comma, no, you are not.
DR. UHRIG: We are going to have chaos out here in the regulatory area if Utility A gets different results
from Utility B and they both have got essentially the same
plants and the -- I don't know how we are going to resolve
that.
CHAIRMAN APOSTOLAKIS: Well, it is not their job
to resolve it. Their job is to make sure that the Applicant
recognizes it and does something about it, but I don't think
the standard really should resolve issues of model
uncertainty that exist out there, so my comment was in the
spirit of make sure that the Licensee realizes that here
there is a real problem and if they have to do a Category 3
PRA they should expect comments from the NRC Staff.
DR. UHRIG: You have got the same problem
eliciting expert opinions. Two different sets of experts
will give you two different views.
CHAIRMAN APOSTOLAKIS: Yes.
DR. UHRIG: So it is probably inherent in the
process.
CHAIRMAN APOSTOLAKIS: Mario? I'm sorry. Karl,
you had a response?
MR. FLEMING: Yes. I think it is an excellent observation. I think it reflects the state of the art to some extent. But the other examples that you mentioned -- buying materials, building a pressure vessel, and so forth -- are simpler in the sense that you can talk about the state variables of something you can measure in an objective sense. The problem we have is that we are trying to do something that is a function not of the state variables but of our state of knowledge about the state variables, and the variability stems from the fact that our states of knowledge are different.
CHAIRMAN APOSTOLAKIS: Not only that, but let's not forget that PRAs are extremely ambitious. They are supposed to capture everything that can go wrong at the plant. This is really a huge task, which necessarily leads to this situation; it is not a specific issue we are dealing with, where we have the parameters to measure it and so on.
I mean, ideally it should be the model for the whole plant -- the equipment and the people and everything. Necessarily, then, you are led to this situation.
I think, Jack, have you been trying to say
something?
MR. SIEBER: Yes. I would just like to comment on
Dr. Uhrig's comment, and this is something that you and I
have talked about too.
When I reviewed the standard, I thought it was good, and I understood what it was you were doing, but I kept in mind some things that were said at a meeting we had on January 27th where you were a presenter, Mr. Fleming, along with Dr. Perry.
CHAIRMAN APOSTOLAKIS: That was the ACRS retreat
in Florida, where Mr. Fleming and Dr. Perry were invited
experts.
MR. SIEBER: Right, and a couple of the statements that were made were that the typical industry PRA is too simple, often incomplete, and sometimes has low probabilities for initiating events.
Another one was that typical industry PRAs have differing engineering assumptions related to equipment performance and phenomenological analysis.
There were some conclusions out of that: CDF and LERF, et cetera, are relative terms; the absolute-value comparison from plant to plant is significantly influenced by the engineering assumptions; and, when used to evaluate a change in risk, PRAs of varying quality may still give valuable risk insights related to plant changes.
Then Mr. Perry said the current quality of typical
PRAs is not sufficient to move from risk-informed regulation
to risk-based regulation.
I think that everybody can at least intuitively
say all these factors are correct.
When I reviewed the standard, I tried to review it with this thought in mind: will the varying quality of PRAs from plant to plant, and the different answers different practitioners would get analyzing the same plant with the same initiating events, as far as the absolute value, be helped by this standard?
I came to the conclusion that, with the exception of the question of completeness, which is the only thing I could pull out of here that was more or less addressed, that problem of inconsistency from one practitioner to another would probably remain.
I don't know whether that is good or bad. From
the delta risk standpoint it makes no difference in my view.
On an absolute value, which is your Category 3 it does make
a difference, and I am not criticizing or suggesting that
you change anything but I would like to hear your comments
on my way of thinking about it.
MR. BERNSEN: Yes. My reaction is that the
introduction of this standard and application of it is going
to lead in time to more convergence.
MR. SIEBER: Okay.
MR. BERNSEN: I don't think that all by itself it
will get you there, but there is a lot of cross-pollination
built into this, if you will, through the peer review
process, through the questions and responses that we will
have to answer over the years, through the additional things
that we are going to have to generate to supplement the
standard.
This is just the beginning. I mean if you recall,
we were talking about it before, the original nuclear boiler
code was a vessel design code for the reactor vessel,
nothing more. Look how far this has evolved to respond to
the need.
I think that we are going to see this as the top level that generates, over a period of time, a lot more formal and informal guidance and examples, and probably training and cross-communication, that is going to bring things closer together in time.
It is not going to happen overnight, but if we don't begin to move in that direction -- starting with a basis that people can use, based on the kinds of PRAs they have, so that we encourage them to begin to use it in a logical, controlled fashion -- we are never going to get there. So this is, I think, a major step in that direction, but we have got a long way to go.
MR. SIEBER: Well, I see it as just one element. There is the standard, and you can publish the standard and the Commission can endorse it, but equally important are two other elements. The peer review and certification process, to me, is the one that will have the greatest influence on standardizing methodology and on self-criticism and criticism of bad or inadequate phenomenological analysis.
The other element is the sourcebook or paper that really compiles what was contained in Version 10 as a way to say, here are the standard methods that are being used; these are the "how-tos." Make the real standard the obligatory part and the other one just a tutorial, so to speak. So I see this as really a three-pronged effort, of which one prong is the standard itself, the second is peer review and certification, and the third is publishing the "how-to" portion of it. That, at least, is the way I see it.
CHAIRMAN APOSTOLAKIS: But I do agree with you, Jack, that if there is one contribution of this standard -- and I am sure there is more than one -- it is in the area of completeness, because it raises all the issues that the practitioner has to think about.
MR. SIEBER: Once they are in the standard you
can't ignore them.
CHAIRMAN APOSTOLAKIS: That's right.
MR. SIEBER: Otherwise you obviously aren't complying, and so you have to give due consideration to all the factors that are in the standard. From that standpoint it is a good thing.
CHAIRMAN APOSTOLAKIS: In fact, given the state-
of-the-art, I would oppose a standard that went beyond that
and actually recommended methods because I don't think
that --
MR. SIEBER: And I agree.
CHAIRMAN APOSTOLAKIS: In many areas we are not
ready.
MR. SIEBER: Well, that is my global comment.
CHAIRMAN APOSTOLAKIS: Well, it seems to me that
Mr. Sieber has broadened the discussion, so maybe we can go
around the table and see what the members have to say in
terms of general comments, if members so desire, or we can
open up the discussion in terms of specific issues and have
an unstructured formal discussion --
[Laughter.]
CHAIRMAN APOSTOLAKIS: -- of the issue by issue.
We certainly have to revisit the issue of
categories because ASME is obviously very much interested in
knowing the subcommittee's views.
I don't hear any suggestions, so I will pick one
method. Why don't we go around the table. I think Jack
just --
MR. SIEBER: I already gave mine.
CHAIRMAN APOSTOLAKIS: -- did his piece. Tom?
DR. KRESS: Certainly. Thank you.
Well, a general comment. I was just a little
disappointed that the standard chose to limit itself to the
current definition of CDF and LERF. The reason I say that
is I think having limited itself to those it did a pretty
good job. I mean, I really have no complaints. But I think that NRC, when they go in to risk-inform the regulations, is really going to be in the business of looking at fission products and the frequency of their releases and controls on those, and the standard comes up short on discussing fission products.
I think more will be needed later by NRC or
somewhere on the standards associated with that, so that is
just a statement saying as far as it went I think you did a
good job. I think it left out an important part.
You always have to limit what you do, especially
if you have got limited time and resources so I think it is
appropriate to limit it but I was a little disappointed in
that.
I do think the main issue with the limited set is going to be the categories -- not whether it is appropriate to have different categories for different applications. I think it is. I think you almost have to buy off on that presumption.
The problem is going to be how to be sure you are
fitting the right application in the right category and I
think the guidance on how to do that might be a little too
loose, that I can fit things into categories one way or the
other by some small assumptions.
I think there might be some thoughts about saying
if you are unclear about what category to use, use the more
stringent one. I really don't see that admonition in there.
I also share George's view that the categories might be tied to some extent to how well you need to know the answer, and that is either in terms of a predetermined confidence level you need in the CDF and LERF -- which may or may not be something a standard ought to deal with; that is something that NRC has to come up with -- but then, if they came up with the confidence level they need, how do they go to these categories and say, this category will give me this confidence level?
That connection is not made explicitly and I think
George's suggestion that looking at it in terms of the 1.174
may be a way to make that link and I did kind of like that
suggestion, George.
I thought the definition of LERF that allowed site-specific flexibility probably goes against the intention of the definition of LERF in the first place, in that it ought to be independent of the site.
I am not quite sure of that one yet. I have to
think about it awhile but that was my first reaction to
that.
I thought the requirement to have a full uncertainty analysis for Category 3 needed a little bit more expansion. The reason is -- and I share George's views -- I can't see any plant-specific PRA doing a full uncertainty analysis of the NUREG-1150 type. What they will do is the Monte Carlo propagation, which is easily done, but I think some guidance is needed on how you deal with the knowledge uncertainty. What do you call it, George? Is that the epistemic? I didn't see really good guidance on how you deal with that in here, because I don't think it is going to come out of a routine uncertainty analysis. So I thought that was a missing element: how to deal with that, particularly for Category 3.
That is basically all the general comments I have
right now, besides the specific ones I had before.
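The routine Monte Carlo propagation of parameter (epistemic) uncertainty referred to above can be sketched as follows; the two-parameter toy logic model and its lognormal distributions are illustrative assumptions, not part of the standard:

```python
import random

random.seed(42)

def sample_cdf():
    # Draw one epistemic sample of each basic-event parameter
    # (lognormals chosen purely for illustration).
    lam_pump = random.lognormvariate(-11.5, 0.8)   # pump failure rate, per hr
    p_valve = random.lognormvariate(-7.0, 0.5)     # valve failure prob, per demand
    # Toy logic model: one initiator per year, pump AND valve must fail.
    initiator = 1.0  # events per year, assumed known exactly here
    return initiator * lam_pump * 8760 * p_valve   # core damage freq, per year

# Propagate: pushing many parameter samples through the model yields an
# epistemic distribution on CDF rather than a single point value.
samples = sorted(sample_cdf() for _ in range(10_000))
mean_cdf = sum(samples) / len(samples)
p05, p95 = samples[len(samples) // 20], samples[-len(samples) // 20]
print(f"mean {mean_cdf:.2e}/yr, 90% interval [{p05:.2e}, {p95:.2e}]")
```

Note that this only propagates parameter uncertainty; the knowledge uncertainty Dr. Kress raises, i.e. uncertainty about which model is right, does not come out of such a routine sampling loop.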
CHAIRMAN APOSTOLAKIS: Mike, do you want to say
anything?
MR. MARKLEY: Yes, I have just got one slightly technical comment; the other two are just formalities.
I guess I am a little bit uncomfortable when I am
told that something is intended for a general purpose and
not really for regulatory purposes and then something like
the maintenance rule A4 is partitioned into Category 1. I
know you are going to go look at all that stuff.
The other two things were just that, on the member comments, I wanted to reinforce that they are only the members' own views; they do not represent the subcommittee's or the full committee's. And I wanted to mention the schedule for the full committee: that is from 1:15 to 3:15 p.m. on Wednesday, July 12th.
CHAIRMAN APOSTOLAKIS: In fact, we should talk about it before we adjourn today, as to recommendations the members might have on what you should address, because it is only an hour and a half, right?
MR. MARKLEY: It's two hours.
CHAIRMAN APOSTOLAKIS: Two hours, and points to
focus on -- let's not forget we should do that before we --
we usually do it with the Staff, because we don't want to
repeat everything verbatim that was done today.
Maybe you will have a chance to think about some
of our comments today and respond to the extent possible.
Dr. Bonaca?
DR. BONACA: I gave all the comments before, but
just to summarize what I view as important in addition to
the points already made by the members.
One is, again, we talked about the characterization of Category 1 for the maintenance rule, and I meant to say this morning that some of the guidance and presentation, as has been pointed out, is that Category 1 is primarily for decisions based on deterministic analysis supplemented with basic insights.
I am not sure that the maintenance rule right now says that as far as the role of risk information in making decisions, so that adds another thing saying that this point really has to be resolved.
There is an inconsistency.
Somewhere else there is also an inference about the PRA -- for example, under Category 1, page 3, it says PRA applications are not expected to impact safety-related SSCs, but you are pulling them out of service. There is an issue there.
The second is the point I made this morning and I
still believe it is important. The chart which you are
presenting there, which is important, because visually it
helps to understand, has the fundamental presumption that
one can build a model to fit a need, and that is simplistic
in PRA.
A complete PRA at a plant is a massive model that contains all kinds of information, and my suggestion would be simply that somewhere in the text you explain that. The intent is to say that if the application is limited enough and clear enough, then it can be used for the purpose, but there should be some warning in there that says don't -- and this is really with good intent.
You may have a very low-capability model that begins to address, for example, subtle electrical changes in some support systems which are nonquality related, and yet they cascade into dependencies which are important.
The third point I would like to make is that I recognize there probably isn't any PRA out there that meets only the minimum requirements of Category 1, but the standard allows it. I don't know how we get around --
CHAIRMAN APOSTOLAKIS: Say that again. There is no
PRA that what?
DR. BONACA: That meets only the minimum
requirements of Category 1.
CHAIRMAN APOSTOLAKIS: I thought that Karl said
that Category 1 in fact is higher level than the IPEs.
Isn't that inconsistent with this?
DR. SHACK: It's the intention.
MR. FLEMING: Again I think the distinction needs
to be broken down at the subelement level, and so we are
looking at the individual elements and subelements of the
PRA. There are some out there that are Category 0, 1, 2, 3, 4, so --
CHAIRMAN APOSTOLAKIS: I see.
MR. FLEMING: -- so no full PRA is only Category
1. It is a mixed bag.
DR. BONACA: The message here is only -- I don't want to interfere in the process. I only want to make sure that as a committee you review it, look at Category 1, and ask yourselves: does it provide a very low standard, and do you want to support it, yes or maybe no?
It may be that you feel comfortable with that and
I trust that the committee can do that. I just say that if
I were in your shoes I would do that, just a simple
verification.
The last thing I would like to add is only that we
did not discuss the issue of peer review.
CHAIRMAN APOSTOLAKIS: Yes. Today we didn't.
DR. BONACA: Yes. But, you know, when I look at some of the qualifications, for example, for the PRA peer review team, I don't understand exactly what kind of latitude you are allowing in the team's qualification, because here it says somebody who is knowledgeable of the requirements in the standard for the area of review -- which means we are all knowledgeable enough right now with the standard.
Or has the most experience performing PRA activities relevant to the area -- it doesn't say he has performed a PRA; it says somebody who has done some, you know, evaluations -- and has collective knowledge of the plant design, containment design and plant operation.
Now, with these general characteristics, I don't know how capable the members will be of performing a true, independent and thorough evaluation.
CHAIRMAN APOSTOLAKIS: It will come down to what the Staff does. If the Staff sees two or three of those that are shallow, the whole process will die.
A comment from somewhere?
MR. HILL: Just a little bit of a response to your concern. Originally -- as you probably saw in Rev. 10 -- we used timing requirements, so many years of experience, et cetera, in various areas, but the point was made that you could have that many years' experience and still not know what you are doing, so that doesn't necessarily qualify somebody.
CHAIRMAN APOSTOLAKIS: That's right.
MR. HILL: And it is difficult to tap into the
brain of the reviewer and say how much do you really know.
We chose these kinds of statements to be able to say that
they need to be able to cover all the ground of the NSSS,
the containment type, the operations, et cetera, leaving it
somewhat to the judgment of the team leader.
That person has to make sure that they have the
right set of skills and talents available to perform their
review.
So, yes, it is somewhat vague and somewhat
general, but I am not sure how you can nail it down because
every time you try to nail it down with some specific
somebody can say, well, that specific doesn't prove
capability.
DR. BONACA: The reason why I am raising this
issue is because this is a unique area where the work done
for most PRAs in the country has been done by a few
specialists and then put in the hands of the utilities.
From my experience many of these do not have the expertise
internally. They have expertise to tinker with some change
inside, but they don't understand oftentimes some of the
real subtle issues that were addressed by the professionals
who built the PRA. That is the only reason why I raised the
question.
MR. HILL: And you had another comment about why
it doesn't include a requirement of having performed a
PRA.
We did have those kinds of words in there but we
came to the rapid conclusion that we don't have people
performing PRAs anymore. We have people updating PRAs
because the PRAs already exist and there probably won't be
any more performed.
If we limit it to that, five years from now with
the career paths we won't have anybody available.
DR. BONACA: I was talking to some
utility guys last week and they were talking about how to
build a PRA capability at their facilities, and every single
one of them wanted an experienced systems engineer who would
be willing to learn PRA methods, and today we are talking
about experienced people doing this, experienced people -- I
don't think college graduates will find a job in the nuclear
business anymore, because all the jobs are for experienced
people.
[Laughter.]
DR. KRESS: Of course.
CHAIRMAN APOSTOLAKIS: I don't know what they have
to do to enter the field.
DR. BONACA: Still this is a unique issue because
I mean --
CHAIRMAN APOSTOLAKIS: It is. It is, but it
ultimately comes to the guys behind us. If they start
rejecting the quality of PRAs that have undergone the PRA
review, then there is a problem, because I am sure that they
will at the beginning at least review, themselves, the
products.
DR. BONACA: They are not rejecting thermal
hydraulic codes.
CHAIRMAN APOSTOLAKIS: What?
DR. BONACA: But that is a different issue.
[Laughter.]
CHAIRMAN APOSTOLAKIS: Thermal hydraulics isn't
different. Thermal hydraulics came from the fountain.
DR. KRESS: It was handed down from on high.
George, I think a good graduate student that
specializes in PRA at the right institution with the right
teacher could probably qualify as being experienced.
CHAIRMAN APOSTOLAKIS: I don't think so, Tom, but
very kind of you. Dr. Uhrig?
DR. UHRIG: The only thing I have to add is
related to a comment you made, concern about this OMB
regulation.
CHAIRMAN APOSTOLAKIS: Yes.
DR. UHRIG: I don't think that is a real problem.
I think if NRC -- I think it has the right to reject or
modify specific parts of any of the codes and having been on
the other side of the fence a time or two, if NRC wants to
do something they usually get it done.
CHAIRMAN APOSTOLAKIS: Still, though --
DR. UHRIG: Philosophically it is a concern but --
DR. KRESS: Well, they have gotten themselves in
that trap with the backfit rule.
DR. UHRIG: What?
DR. KRESS: They have certainly gotten themselves
in that trap with the backfit rule.
DR. UHRIG: Yes, they have. What I alluded to was
before the backfit rule came into effect. That's it.
CHAIRMAN APOSTOLAKIS: Okay, thank you. Bill?
DR. SHACK: I think it is a very interesting
attempt -- I think there is a lot of good, useful
information on the categorization in your viewgraphs and in
1-5 I think you beat up enough on that already today, but I
think you really do have the material to make a useful
approach to the categorization in the viewgraphs.
CHAIRMAN APOSTOLAKIS: Don't worry about
limitations of space when it comes to 1.5, okay? Take as
much as you want.
[Laughter.]
MR. BERNSEN: Well, I am going to go around our
side too. One of the points that was made I guess by Tom or
somebody with regard to the completeness of this standard,
as we mentioned before, this is the first effort.
You are probably aware of the fact that ANS is
writing some parts to the overall PRA and the low power
shutdown, external events.
We have had on our plate a work assignment to look
at what is needed in the future, and in fact Karl graciously
agreed, I still think, to lead our little task group to
define what we should be doing in the future, so this is not
the end of the line in the process by any means. It is the
first step.
We recognize -- I think all of us -- that you need
to go further in terms of detail, in terms of guidance, in
terms of expansion of scope and things of this sort and that
will be done and we will work in a coordinated fashion with
ANS in doing that.
DR. KRESS: Will this be like other ASME standards
that may get updated?
MR. BERNSEN: Yes. It's intended -- this is a
living document.
DR. KRESS: Living document?
MR. BERNSEN: That we have done with our O&M, with
our code sections, with our QA and with all the other
standards. They are living documents. They are maintained.
I was going to say, Bob, you know, think of this
standard more in terms of a QA program standard, which
really didn't give prescriptive requirements on how you do
things either.
This is not the first step. It is somewhere in
the middle between them but we have had to deal with
different approaches for standards, but as I say, the main
thing to keep in mind is this is the first step. It is
going to be maintained. It is going to be interpreted.
I am sensitive to the concern whether or not --
the Staff certainly can accept or reject pieces and parts of
the standard. It makes it more difficult for them to do
that if we have something in the standard that says this is
the way we intended it to be used, so we have got to be
careful that we don't necessarily lead the pack when we
should be following. We will think about that.
DR. KRESS: Okay.
MR. FLEMING: I just wanted to make one final
comment about the categories that I don't think we had a
chance to bring up. There is, I think, a quantum leap in
improvement when we went from one line in the sand to the
idea of multiple categories, because what we were trying to
avoid, by having one line in the sand, is the unfortunate
consequence of having everybody expected to get to that line
and stop. So the idea of recognizing the three categories,
especially Category 3, also points a direction for future
enhancement of the technology.
We wanted to try to avoid just this idea of
meeting the requirement -- what do I have to do to meet the
requirement, as opposed to what we need to do to advance
this technology so we can make better decisions.
To echo something Sid said in response to Jack's
comments earlier, I do believe that the certification
process is already helping this problem of variability in
the PSA results, and I also agree with what you say. It
can't be done with any one leg of the stool.
MR. SIEBER: Right.
MR. FLEMING: It is all three of these -- the
certification process, the standard and the methodology
enhancements have to be working together.
CHAIRMAN APOSTOLAKIS: Gerry?
MR. EISENBERG: I am going to hand off to Ron
first.
CHAIRMAN APOSTOLAKIS: Sure. You are not
obligated to speak.
MR. SIMARD: All the good comments have been
taken.
I would just like to thank you on behalf of the
project team for some good discussions, some constructive
suggestions over the past day and a half.
I think I have heard that we have struck a
reasonable approach by trying to approximate the spectrum of
possible applications with these three categories, have
certainly gotten the signal that we need to do a better job
of characterizing the attributes of these categories, and
appreciate the fact that you gave us specific suggestions
that we can work on, so I just want to thank you.
CHAIRMAN APOSTOLAKIS: You are welcome. Ian,
would you like to say anything? No?
Before we adjourn though, I think there are two
points to be brought up.
As a prelude, I always find that being a reviewer
sometimes gives you a certain perspective that is not always
right, so I always learn when I have to defend my research
contracts at MIT before other people who are reviewing me.
I get a little upset at the beginning when they dare ask
questions, but then after a while I realize that this is the
name of the game, so I imagine myself sitting over there and
I act accordingly.
Now why is that relevant to this?
Well, there was a suggestion made yesterday at the
workshop which I thought was very good. The suggestion was
that the NRC Staff apply this standard to its own work, and
what came to mind was the SPAR models.
This committee recommended in the recent past to
the EDO that the SPAR models be subjected to peer review.
The response from the EDO was no, we have had enough peer
review and they have been used by some Sandia folks -- that
is good enough.
It seems to me that we should come back to this
and if the committee agrees, of course, we should come back
to it and I think it will be a healthy exercise for the NRC
Staff to use this approach and maybe try to categorize SPAR
models and what they can do -- well, I think that will be a
very healthy exercise.
DR. BONACA: We will have to develop that.
CHAIRMAN APOSTOLAKIS: If we can demand perfection
from others --
DR. BONACA: We have to develop a new category
then because --
CHAIRMAN APOSTOLAKIS: What?
DR. BONACA: I will not mention it.
[Laughter.]
CHAIRMAN APOSTOLAKIS: But I mean the SPAR models
eventually will be, unless I am mistaken, the plant-specific
PRAs that the Staff will be using not to make decisions but
as a major input to their decisionmaking process.
DR. KRESS: That's right.
CHAIRMAN APOSTOLAKIS: And I don't see why the
SPAR models cannot be subjected to this particular process.
Yes? We get a smile from the Staff. Are we
getting anything more? Oh, there you are.
MR. CHEOK: By default I'm it. This is Mike Cheok
from the Staff.
I guess the SPAR models will be used as an initial
stepping stone into whether something is risk significant or
not. All they do is tell us if something needs to be
looked at some more, and if something needs to be looked at
some more, we will look to the licensees for more specific
information.
CHAIRMAN APOSTOLAKIS: Well, I guess you just told
us that the SPAR cannot be Category 3.
[Laughter.]
CHAIRMAN APOSTOLAKIS: Now the question is are
they Category 1 or Category 2?
MR. CHEOK: In my opinion, probably not Category
1.
CHAIRMAN APOSTOLAKIS: 1.5 perhaps.
MR. CHEOK: It is probably below a Category 1.
DR. KRESS: Category 1.9.
CHAIRMAN APOSTOLAKIS: Below Category 1?
MR. CHEOK: Yes.
CHAIRMAN APOSTOLAKIS: Well, they just did it.
But we may find other examples.
MR. CHEOK: That's right.
CHAIRMAN APOSTOLAKIS: Thank you very much,
anyway, for the comment. That is your expert opinion. Give
it to Budnitz. He will give it to us.
The other one is do the members have any
suggestions regarding the July meeting? Should, for
example, Mr. Bernsen and Mr. Fleming and Mr. Simard come
here with the same presentations or should they modify them
a little bit?
MR. SIEBER: Condense them.
CHAIRMAN APOSTOLAKIS: I mean we can't tell you
what to do. I'm sorry. I am using the word "should" -- you
are not Staff.
MR. BERNSEN: "Should" is a recommendation.
CHAIRMAN APOSTOLAKIS: Recommendation, okay, or
"shall" they -- they should consider. They should consider.
DR. KRESS: I think they ought to consider the
same stuff, only condensed a bit and maybe focus a little
bit on the categorization process.
CHAIRMAN APOSTOLAKIS: I would agree with that.
I mean if you could -- I don't know how much time
you have until then, but maybe take the major comments that
were made today and give us some preliminary reaction?
DR. KRESS: Yes.
CHAIRMAN APOSTOLAKIS: That will bring the
committee up to speed. I don't know if you realize this is
not a paying job, for some of you anyway, so it may not be
enough time, but if you could address some of these comments
or what you thought was something --
MR. BERNSEN: We can do that. Of course we will
not have had all the comments. We won't have a project team
meeting, so again, just as I said in this meeting where we
are representing ourselves as knowledgeable --
CHAIRMAN APOSTOLAKIS: Sure.
MR. BERNSEN: -- committee members and project
team members, we certainly will be able to I think identify
some of the issues you have raised and the fact that we are
going to take them under advisement in some possible ways to
the extent we can.
CHAIRMAN APOSTOLAKIS: For example, Karl, when you
presented the categories -- you might modify your viewgraphs
a little perhaps to reflect some of the things that you
accept and say this is what eventually the document will
say. I think that will promote better understanding.
Again, you don't have to do this. These are
individual comments.
MR. MARKLEY: And because we had four members who
didn't attend I think it is unavoidable to have the
overview.
CHAIRMAN APOSTOLAKIS: We will have the overview.
MR. MARKLEY: As short, brief, and concise as you
can, but the issues are really the important points.
CHAIRMAN APOSTOLAKIS: And judging from the
experience of yesterday and today, the question of how do
you really define the categories is there, so if they feel
they have gotten any useful comments today, then maybe they
should do it.
MR. EISENBERG: You mentioned there is a two-hour
window for that. How much of that window is the
subcommittee presentation?
DR. KRESS: Normally --
MR. EISENBERG: Go ahead.
MR. MARKLEY: I'm sorry, Tom. Normally it is
about five to ten minutes, just introductory remarks to
introduce you, and then you have the majority of the rest
of the --
CHAIRMAN APOSTOLAKIS: Well, I can go over the
major points.
DR. KRESS: Normally though, when you have two
hours you ought to count on about an hour of that is yours.
The rest of it is interruptions from us.
MR. EISENBERG: I understand -- just to know what
the ratio was.
DR. BONACA: I think they have a very good summary
presentation. If you went through that and if you just
simply acknowledged some of the questions you got, to
anticipate those --
CHAIRMAN APOSTOLAKIS: I will try to sketch some
of the major points in my introduction, okay.
DR. BONACA: -- so that we avoid having to jump in
again and again on the same issues, and otherwise I think
the summary presentation was good.
CHAIRMAN APOSTOLAKIS: And the nature of that
meeting is not to go into details and say on line this you
said that and so on, but we will write a letter, right,
addressed to the EDO?
Does anyone have any comments or questions from
the people around the table or others?
Hearing none --
DR. BONACA: I want to say we had a lot of
comments, a lot of criticism, et cetera. Again, you know,
at least personally, I don't want to leave the impression
that there hasn't been progress since last year. I think
there has been progress -- that is just my personal view --
and you people should be more than encouraged for what you
are going through.
I mean you are sifting through not only our
comments but so many others and so that is what I wanted to
say.
DR. KRESS: Yes. I second that. I think you are
on the right track with this as the kind of standard we need
to come up with. I certainly want to thank you guys.
MR. EISENBERG: Thank you very much.
CHAIRMAN APOSTOLAKIS: It's a very difficult job
trying to draw the line between various applications and so
on, and a lot of it is subjective, as we discussed, so we do
realize that you have a difficult job on your hands.
The comments are offered, you know, in a
constructive spirit and hopefully they will improve the
product, because there is a need out there, even
internationally. I am being asked by a lot of people when I
travel, especially people who are not in the business, and
they are surprised that there is no standard for doing this
new thing, so I appreciate your coming here and being
patient with us and thank you very much.
I think it was a very good meeting. We all
learned something, and on that note this meeting is
adjourned.
[Whereupon, at 2:44 p.m., the meeting was
concluded.]