[Seal of the U.S. Department of Justice]

AFFIDAVIT OF MICHAEL J. FRIDUSS



On behalf of the

Antitrust Division
U.S. Department of Justice


May 20, 1997





I.
PROFESSIONAL BACKGROUND

  1. My name is Michael J. Friduss. My business address is 1555 Museum Drive, Highland Park, IL 60035. I am an independent consultant working with CA Hempfling & Associates under contract with the Antitrust Division of the Department of Justice.

  2. I received a Bachelor of Science degree in industrial engineering from the Illinois Institute of Technology in 1964, and a Masters degree in Management from Northwestern University in 1971.

  3. I began my telecommunications career in 1964 as a Management Assistant for Illinois Bell Telephone Company ("Illinois Bell"). In this capacity, I filled a variety of non-management and management positions designed to familiarize me with all departments of the company.

  4. From 1966 to 1969, I was a Manager in Illinois Bell's Plant Department. In this capacity, I supervised installation or repair operations in three different territories on the South side of Chicago.

  5. In 1969, I was promoted to District Engineering Manager, responsible for the engineering and design of outside plant, also on Chicago's South side. In 1970, I was appointed District Manager-Outside Plant Engineering Staff for Chicago Operations, responsible for methods and procedures and approval of major outside plant capital expenditures.

  6. In 1971, I was appointed District Plant Manager, responsible for installation and repair activities in Chicago's Hyde Park area. During my tenure in Hyde Park, I also headed up an Operations Review team that assessed the quality and cost performance of each district in Chicago Operations.

  7. I was promoted to Division Manager-Corporate Planning at AT&T in New York in 1973 and served through 1975. In this capacity, I headed a small group responsible for the study of the telecommunications interexchange industry at that time and what AT&T's future strategy should be in that segment of the industry.

  8. In 1975, I returned to Illinois Bell as Division Plant Manager, responsible for installation and repair in the South suburban area. In 1978, I was named Division Manager-Corporate Planning for the company, responsible for Illinois Bell's planning and operations budgeting, including operations planning for the implementation of the FCC's Computer Inquiry II and divestiture.

  9. In 1983, I was promoted to General Manager-Distribution Services, responsible for Illinois Bell's outside operations, construction, and engineering. In this capacity, I supervised 7,000 employees and a budget of $500 million.

  10. In 1986, I was promoted to Vice President-Personnel and Support Services for Michigan Bell and in 1989 was named Vice President-Customer Sales and Service for the same company.

  11. In the latter role, I was chief operating officer of the company and a member of the Board of Directors, with responsibility for operations and sales, including 11,000 employees and expenditures in excess of $1 billion.

  12. In 1992, I returned to Ameritech Services as Vice President-Customer Service and Information Technology, responsible for the strategic and tactical direction of Ameritech's customer service and operations, as well as planning, building, and maintaining high quality and efficient computer systems.

  13. In late 1993, I formed MJ Friduss & Associates, consultants to the telecommunications industry. Our clients are carriers, primarily current and new local service providers, and small to medium sized companies that provide hardware, software, and operating systems to those service providers. We are currently working with a number of firms in the areas of strategic planning, marketing, operations, customer services, and supplier management.

  14. Additionally, I am Editor of the Friduss Report, a newsletter focused on carrier procurement processes.





    II.
    SCOPE OF ASSIGNMENT

  15. I have been asked by the Antitrust Division of the U.S. Department of Justice for my opinion regarding the appropriateness and comprehensiveness of the performance measures Southwestern Bell (SWBT) proposes to provide to competitors and regulators. In particular, I have been asked whether these performance measures will reasonably depict the performance of wholesale functions SWBT is obligated to perform pursuant to the competitive checklist of section 271 of the 1996 Act, and whether such measures will enable competitors and regulators to determine both the adequacy of SWBT's performance and the parity of such performance when compared to SWBT's retail operations.

  16. The primary source upon which I relied for my analysis is SWBT's Section 271 application for Oklahoma. I generally reviewed the application for any discussion of performance measures. Additionally, I have reviewed:

    • Oklahoma Corporation Commission's Operating and Maintenance Requirements pertaining to Southwestern Bell.

    • The FCC's Quality of Service report, which summarizes quality of service based on data submitted by the BOCs, GTE and Sprint.

    • SWBT's Statement of Generally Available Terms and Conditions ("SGAT") before the Corporation Commission of the State of Oklahoma to provide interLATA telephone service in Oklahoma.

    • Testimony before the Corporation Commission of the State of Oklahoma related to Southwestern Bell's application for full interLATA competition in Oklahoma.

    • The Telecommunications Act of 1996.

    • Interconnection Agreements between SWBT and Competitive Local Exchange Carriers ("CLECs") in Oklahoma.

    • SWBT's Interconnection Agreement with AT&T in Texas.

    • Performance Measures proposed by other BOCs as well as proposals by several CLECs.

    • Letters from SWBT to the Department of Justice regarding the performance measures on which SWBT proposes to report.

    • Comments SWBT filed with the FCC related to section 272 of the 1996 Act.

  • I have also attended meetings both with SWBT and several CLECs interconnecting with or negotiating to interconnect with SWBT, or reviewed notes of such meetings.

  • Additionally, I have reviewed performance measures proposed by other BOCs, such as the attached Ameritech proposal, in various proceedings in other states.

  • Finally, in reviewing SWBT's proposals, I have drawn upon my significant experience with quality performance measures. As a telephone company line manager and officer, my performance was judged in part by how well I met customer service objectives. Further, as a staff manager, I had responsibility for the development and implementation of quality performance measures.





    III.
    PERFORMANCE MEASURES AND THEIR ROLE

  • The 1996 Act obligates incumbent local exchange carriers (ILECs), and thus Bell Operating Companies (BOCs), to provide requesting carriers with, among other things, interconnection, access to unbundled network elements, and resale services. In fulfilling these obligations, BOCs will perform a variety of functions for competitors, many of which BOCs also perform in providing retail services. Some of these functions, however, will be new.

  • The ability to detect discrimination in the performance of these functions is dependent on the establishment of performance measures, allowing competitors and regulators to measure the BOC's performance. The development of appropriate measures is critical to establishing that the local market is open. On an ongoing basis, the measures must be able to assure that the local market remains open and that any BOC backsliding will be detected.

  • Performance measures serve as criteria for indicating the performance of such wholesale functions. Performance measures enable competitors and regulators to compare a BOC's performance of a function with that provided to the BOC's retail customers, or to assess such a function in the abstract. For example, to measure how well a BOC performs the function of provisioning resold local service, we can define a performance measure -- "the percentage of orders not completed within three days" -- and use it to describe the BOC's performance and to compare it to the BOC's retail performance of the same function. In general, performance measures are used to determine quality, measuring how long an activity takes to complete -- cycle time -- and how well the activity is performed -- reliability.

  • A performance measure may take the form of an objective or target, such as the example cycle-time measure "three days to complete an order" above, where the result is a percentage of orders meeting or not meeting the target. A performance measure can also be a raw time interval, such as the average number of days to complete resale orders. In neither case, however, does the outcome of the measure -- the percentage or cycle time -- itself indicate "good" performance or "bad" performance. Thus, performance measures themselves are not the barometers of performance, but rather the yardsticks with which to measure such performance. Accordingly, my review is limited to the adequacy of SWBT's performance measures, rather than the adequacy of its performance.
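    For illustration only, the following sketch (in Python, using hypothetical field names rather than any carrier's actual records or systems) shows how the two forms of the example resale provisioning measure could be computed, and how the same calculation would be applied to BOC retail orders and to CLEC orders for a parity comparison:

```python
from datetime import timedelta

def pct_not_completed_within(orders, days=3):
    """Target form: percentage of orders NOT completed within `days` days.

    Each order is a dict with hypothetical fields: "placed" (datetime the
    order was received) and "completed" (datetime, or None if still open).
    """
    if not orders:
        return 0.0
    target = timedelta(days=days)
    missed = sum(
        1 for o in orders
        if o["completed"] is None or (o["completed"] - o["placed"]) > target
    )
    return 100.0 * missed / len(orders)

def avg_completion_days(orders):
    """Raw-interval form: average days from order placement to completion."""
    done = [o for o in orders if o["completed"] is not None]
    if not done:
        return None
    return sum((o["completed"] - o["placed"]).days for o in done) / len(done)

# A parity comparison simply computes the same measure over two populations,
# e.g. pct_not_completed_within(boc_retail_orders) versus
# pct_not_completed_within(clec_orders).
```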

  • The highest priority performance measures should be those that describe the end-to-end quality of service -- cycle time and reliability -- from the customer's viewpoint. Studies over the years have identified performance measures that correlate highly with the customer's perceptions of service quality, such as the percentage of repeat reports of trouble, while others have a lower correlation.

  • While performance measures are generally easy to identify, there is no universally accepted definition of what the measure proposes to reveal nor specifically how to gather the necessary data that comprises the measure. For example, cycle-time performance measures are dependent on the specific definition of start and stop times, while reliability measures are dependent on the specific definition of what constitutes a failure. This affidavit does not attempt to specify these definitions. However, it is critical that SWBT and interconnecting CLECs do so. To further ensure the usefulness of the results, I have assumed that all parties will commit to reporting results that reflect the spirit of the performance measure as well as its paper definition. For example, in measuring the level of missed appointments, the result should be measured against the original due date; due date changes could only be considered when explicitly requested by the end user.

  • My review of SWBT's proposed performance measures includes an assessment of:


    A.     BOC PERFORMANCE MEASURES TO DATE

  • Over the past 120 years, telephone companies have developed extensive measures of customer service. These performance measures have generally served two purposes: first, to allow for the comparison of performance between managers, territories, organizations, and companies; and second, to provide regulators with indicators of potential problems. These measures cover all areas of customer-affecting performance, including customer care, provisioning, repair, billing, and network maintenance. Regulatory requirements notwithstanding, these performance measures comprise a key indicator of management success. Objectives are set, data is gathered, reports are published, and results become part of the corporate, organizational, and individual success determination.

  • Using performance measures, most state public utility commissions require achievement of certain levels of performance for customer service. For example, the Oklahoma Corporation Commission requires the following:

  • The FCC requires the BOCs, GTE, and Sprint to submit quality of service data that is summarized annually in a report entitled "Quality of Service for the Local Operating Companies Aggregated to the Holding Company Level." Without specifying particular levels, the report includes the following performance measures:


    B.    PARITY VERSUS ADEQUACY PERFORMANCE MEASURES TO DATE

  • Given the dual retail and wholesale roles BOCs must now play under the 1996 Act, there are two approaches to measuring the performance of a particular function: parity performance measurements and adequacy performance measurements. When a BOC's performance of certain functions for its retail units or "end user" customers is identical or analogous to the performance of those functions for competitors or their customers, parity performance measures apply. Parity performance measures merely juxtapose performance results, such as trouble reports per month per customer placed by the BOC's customers compared with those of a competitor's customers. Thus parity performance measures are used for "apples to apples" comparisons, and are most often applied in the resale environment, where the functions a BOC performs for a competitor's customers are almost identical to those performed for its own retail customers.

  • In contrast, adequacy performance measures establish an objective or target pertaining to functions a BOC either (1) performs only for competitors, or (2) performs for competitors in a manner sufficiently different from that performed for the BOC itself such that a comparison is meaningless or unhelpful. Thus adequacy performance measures apply in "apples-to-oranges" comparisons. An example might be the average time to provision an unbundled loop.


    C.    BOC WHOLESALE FUNCTIONS

  • It is helpful to divide the functions BOCs will perform for CLECs under the 1996 Act into five primary categories: preordering, ordering, provisioning, maintenance and repair, and billing functions. These categories describe the functions through which CLECs acquire new customers and subsequently maintain facilities for them and bill them. Within each category, performance measures identify the cycle-time and reliability of each function. Performance parity is achieved if CLEC customers enjoy cycle time and reliability of functions equivalent to that experienced by the BOC's customers or its affiliates' customers.

  • Pre-ordering describes the up-front process of a CLEC or BOC customer service representative obtaining information to place an order for new, additional, or changed service. Pre-order cycle time performance measures generally refer to operations support system (OSS) response times that allow the representative to complete the service order with the customer on the line (e.g. customer address verification or appointment scheduling). Pre-order reliability performance measures refer to the accuracy and completeness of the data received. These pre-ordering functions are generally visible to the end user.

  • Ordering describes the process of the service representative transmitting the service order into the BOC's OSSs for facility assignment, data base updates (including 911, directory listing, and repair), switch updates, and dispatch of a technician if required. For a CLEC, this includes successfully moving the service order across an agreed-upon interface into the BOC's OSSs. Ordering cycle time performance measures refer to BOC response times for notices of order confirmation, jeopardy, or rejection. Ordering reliability performance measures refer to the accuracy and completeness of these notices. Ordering is generally transparent to end users.

  • Ordering performance measures also relate to the measurement of service order "flow-through." Flow-through measures the percentage of service orders that flow from the service representative to completion if no technician dispatch is required or to the point of dispatch if dispatch is required.
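    To make the flow-through calculation concrete, here is a minimal sketch (Python, with a hypothetical field name; not SWBT's or any carrier's actual order data) of how the percentage could be computed under the definition above:

```python
def flow_through_pct(orders):
    """Percentage of service orders that flow through the BOC's OSSs
    without manual intervention.

    Each order is a dict with a hypothetical boolean field
    "manual_handling", set if the order fell out for manual work before
    completion (or, where a technician dispatch was required, before the
    point of dispatch).
    """
    if not orders:
        return 0.0
    flowed = sum(1 for o in orders if not o["manual_handling"])
    return 100.0 * flowed / len(orders)

# Parity check: flow_through_pct(clec_orders) compared with
# flow_through_pct(boc_retail_orders).
```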

  • OSS availability and BOC service center answer time performance measures can also be considered to be part of the ordering process.

  • Provisioning involves the execution of a request for a set of products and services or unbundled network elements with attendant acknowledgments and status reports. Provisioning performance measures measure how quickly and how well customer service orders are completed. Provisioning results are highly visible to end users and are critical to the determination of performance parity. Provisioning cycle time performance measures refer to measuring the interval, from the end user's perspective, from order placement to order completion. Example measures include average POTS completion interval and percent missed appointments. Provisioning reliability performance measures refer to the accuracy of the work (i.e., did the end user receive what they ordered) and to the quality of the work done (i.e., did everything work). An example measure is the percentage of new service failures within an agreed upon time.

  • For purposes of this review, I have evaluated categories of repair and maintenance separately. Repair is the process by which end users report a case of trouble and the trouble is subsequently cared for. This process is highly visible to the end user and has a high correlation with the end user's perception of the service provider. Repair cycle time performance measures measure the interval from end user report to trouble clearance and notification. Examples include mean time to repair and percent missed appointments. Repair reliability performance measures measure the quality of the repair operation. An example is the percentage of trouble recurring within an agreed upon time.
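    As an illustration of how such repair measures might be computed, the sketch below (Python; the trouble-ticket fields are hypothetical, not drawn from SWBT's systems) calculates a mean time to repair and a repeat-trouble percentage:

```python
from datetime import timedelta

def mean_time_to_repair(tickets):
    """Average hours from the end user's trouble report to clearance.

    Each ticket is a dict with hypothetical fields "reported" and
    "cleared" (datetimes; "cleared" is None for open tickets) and
    "line_id" identifying the affected line.
    """
    durations = [(t["cleared"] - t["reported"]).total_seconds() / 3600.0
                 for t in tickets if t["cleared"] is not None]
    return sum(durations) / len(durations) if durations else None

def pct_repeat_troubles(tickets, window_days=30):
    """Percentage of cleared troubles followed by another report on the
    same line within `window_days` of clearance."""
    cleared = [t for t in tickets if t["cleared"] is not None]
    if not cleared:
        return 0.0
    repeats = 0
    for t in cleared:
        if any(u is not t and u["line_id"] == t["line_id"]
               and timedelta(0) < (u["reported"] - t["cleared"])
               <= timedelta(days=window_days)
               for u in tickets):
            repeats += 1
    return 100.0 * repeats / len(cleared)
```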

  • Maintenance refers to how well the network itself is maintained and associated performance measures generally refer to reliability rather than cycle time. The most visible performance measure is the mean time between troubles, often referred to as the trouble report rate. Other performance measures measure how well the BOC's switching and transmission elements are maintained. Examples include percent dial tone delay, percent switches with unscheduled downtime, and transmission signal to noise ratio.

  • Billing performance measures measure the speed, accuracy, and completeness of end user usage data from the BOC to the CLEC. While the process may be transparent to the end user, the end product is highly visible. Examples of performance measures include the percentage of billing records delivered on time and the percentage of accurate and complete bills.

  • There are several miscellaneous functions that must also be measured. For example, toll and directory assistance operator services and directory listing must be included as performance parity categories. Typical performance measures include operator speed of answer and directory listing accuracy.


    D.    MARKET AND PRODUCT DISAGGREGATION OF PARITY PERFORMANCE MEASURES

  • As discussed above, meaningful determinations of parity performance require "apples-to-apples" comparisons of the functions performed by a BOC. Where, however, the same function is performed, for example, by different personnel, with different facilities, for different customer classes, or for different products, more refined comparisons are required. Thus, for example, the function of installing POTS service for consumer and business customers may be identical, but because business customers may be more sensitive to installation delays, a meaningful comparison may require juxtaposition of only business customer installation intervals.

  • There are two general categories of such further disaggregation. First, market parity refers to equality between appropriate customer groups. Customer groups may be broken out geographically or by class of service. Geographic market parity means comparing CLEC results to BOC results within the geography in which the CLEC has chosen to offer service. For example, if a CLEC offers resale service only in city A, a meaningful comparison may require the BOC to provide its retail results only for city A.

  • Class of service market parity means comparing CLEC results to BOC results within the classes of service the CLEC has chosen to offer. For example, if a CLEC offers service to small business end users only, for purposes of comparison a BOC would have to provide its retail results for such small business users.

  • A second category of disaggregation is product parity. Product parity can be divided into wholesale and retail product types. The first breakout is by the type of wholesale product: resale services, unbundled network elements, or facilities-based interconnection. Resale performance measures are generally parity measurements, while unbundled element and facilities-based interconnection performance measures are generally adequacy measurements. The second breakout is by the retail product or service offered to the end user: POTS, Hicap, Subrate, ISDN, Centrex, etc. For example, if a CLEC chooses to offer ISDN, the BOC would provide performance measurements for its own ISDN retail product.
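    For illustration, the brief sketch below (Python, with hypothetical record fields) shows the kind of filtering that the geographic, class-of-service, and product breakouts discussed above imply before any parity measure is computed:

```python
def subset(orders, city=None, customer_class=None, product=None):
    """Return only the records matching the CLEC's chosen market and product.

    All field names ("city", "class", "product") are hypothetical examples.
    """
    return [o for o in orders
            if (city is None or o["city"] == city)
            and (customer_class is None or o["class"] == customer_class)
            and (product is None or o["product"] == product)]

# If a CLEC resells small-business ISDN only in city A, the comparison uses:
#   clec_side = subset(clec_orders, city="A",
#                      customer_class="small_business", product="ISDN")
#   boc_side  = subset(boc_retail_orders, city="A",
#                      customer_class="small_business", product="ISDN")
# and then applies the same performance measure to both subsets.
```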


    E.    REPORTING REQUIREMENTS

  • Once appropriate performance measures have been agreed to and the data gathered, the results must be formatted into reports and provided to CLECs and regulators. My review will include proposed report formats, report frequency, the appropriateness of result comparisons, report accuracy and completeness, and the availability of raw data.

  • Report format relates to how performance measure results are presented. Are they presented in table or graph form? Are they readable and understandable? Can a CLEC or regulator determine whether parity has been achieved? Report frequency relates to how often reports will be provided. Report accuracy and completeness relate to the statistical validity of the proposed data. Appropriateness of results comparisons relates to the entities for which the data will be provided. BOC retail? BOC subsidiaries? The CLEC? All CLECs? Other?
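    As a simple illustration of the reporting questions above, the following sketch (Python; the measure names and figures are hypothetical examples, not actual results) formats results side by side so a CLEC or regulator can see at a glance whether parity appears to hold:

```python
def parity_report(rows):
    """Print a simple side-by-side parity table.

    `rows` is a list of (measure_name, boc_retail_result, clec_result)
    tuples; all names and values are illustrative only.
    """
    print(f"{'Performance Measure':40} {'BOC Retail':>12} {'CLEC':>12}")
    for name, boc, clec in rows:
        print(f"{name:40} {boc:>12.1f} {clec:>12.1f}")

# Example usage with illustrative figures:
# parity_report([
#     ("% orders not completed in 3 days", 4.2, 6.8),
#     ("Mean time to repair (hours)", 18.5, 22.1),
# ])
```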





    IV.    REVIEW OF SWBT'S PROPOSED PERFORMANCE MEASURES

  • This section of the affidavit turns to the performance measures explicitly cited in SWBT's application, performance measures implied by existing interconnection agreements or comments on Section 272 service requirements, and performance measures not explicitly or implicitly included that are important to measuring functions required under the 1996 Act.


    A.   PERFORMANCE MEASURES REFERENCED IN THE APPLICATION

  • SWBT's application for provision of in-region, interLATA service in Oklahoma commits to equal quality of interconnection to new entrants. Section II.B.1. of SWBT's Brief in support of the application states, "To ensure equal quality, interconnection with CLECs will be accomplished using the same facilities, interfaces, technical criteria, and service standards as SWBT uses for its own internal operations." SWBT Brief at 19, citing Deere Aff. ¶. Further, with regard to resale, SWBT commits to making services available for resale that are "equal in quality, subject to the same conditions, and provided with the same provisioning time intervals as the services SWBT provides to other customers, including end users." Id. at 40, citing Kaeshoffer ¶.

  • SWBT's application states that their experience providing exchange access services to the long distance industry, together with "established, objective performance measures and monitoring mechanisms, make a reversal to lower quality service utterly implausible." Deere Aff. ¶; Kahn Aff. ¶. The application goes on to identify currently filed and available regulatory reports that relate to service quality, customer satisfaction, and infrastructure investment. The application cites trunk blockage, total switch downtime, consumer satisfaction, and installation and repair intervals as examples of performance measures currently available on reports filed with the FCC. As discussed above, these measures, if properly measured and reported, can be important parity determinants. FCC ARMIS reporting by itself, however, is not sufficient to judge performance parity.

  • Obviously, SWBT would need to provide separate data for retail versus wholesale performance to make a comparison. The ARMIS data filed with the FCC does not provide such a breakout. Nor does the ARMIS data cover many of the new functions BOCs will have to perform for CLECs under the 1996 Act.

  • SWBT's application also refers to sixteen negotiated interconnection agreements in Oklahoma, with six approved by the Oklahoma Corporation Commission. Zamora Aff. ¶. SWBT primarily relies on these agreements as providing all the performance measures necessary to gauge its performance of wholesale functions.

  • Most interconnection agreements entered into under the 1996 Act, including SWBT's agreements with CLECs, contain few or no references to specific performance measures. Based on discussions with numerous CLECs, a primary reason appears to be the weakness of CLEC negotiating positions and the higher priority placed on entering the market quickly rather than delaying negotiations or enduring arbitrations to establish long-range safeguards such as performance measures. The CLECs reason that once in the market, they will attempt to renegotiate performance measures or simply rely on those established by larger carriers such as AT&T. As a result, interconnection agreements in general, and as discussed below SWBT's in particular, do not provide the performance measures necessary to allow a Section 271 determination of nondiscriminatory performance of wholesale functions.

  • Finally, I reviewed SWBT's Statement of Generally Available Terms and Conditions ("SGAT") filed with the Oklahoma Corporation Commission on January 15, 1997. The SGAT commits to providing new entrants with network elements, resale services, and access to OSS functions. For network elements, it also commits to performance "at least equal in quality and performance as that which SWBT provides to itself." SGAT Appendix UNE 2.14.1. Further, it provides for liquidated damages if cycle time objectives for the installation and repair of unbundled loops and the installation of interim number portability are missed. These liquidated damage provisions also appear in SWBT's executed interconnection agreements. However, these few performance measures are inadequate in both number and scope to monitor whether SWBT is discriminating against new entrants.

    B.    PERFORMANCE MEASURES INCLUDED IN INTERCONNECTION AGREEMENTS

  • SWBT has sixteen interconnection agreements in the State of Oklahoma. These agreements commit to several explicit performance measures that are also identified in SWBT's SGAT. Most commit to specific objectives for:

  • In its interconnection agreement with AT&T in Texas, SWBT commits to providing the following resale "Performance Metrics":

  • In the AT&T agreement, referring to the Performance of (Unbundled) Network Elements, SWBT commits to meeting "applicable performance measures and be at least equal in quality and performance as that which SWBT provides to itself." Texas Interconnection Appendix UNE 2.16.1; Oklahoma SGAT Appendix 2.14.1. At AT&T's request, SWBT "will:

    1. maintain data that compares the installation intervals and maintenance/service response times experienced by AT&T's customers to those experienced by SWBT customers and the customers of other LSP's; and

    2. provide the comparative data to AT&T on a regular basis." Attachment UNE 2.16.7. These are fine commitments, but they need to be defined explicitly through specific performance measures.

  • SWBT's Interconnection Agreement with Sprint in Oklahoma is the most comprehensive in terms of explicit discussion of and commitment to performance measures. In Attachment UNE 2.17.7, SWBT and Sprint "will jointly define data consistent with that provided by SWBT to other LSP's, that is to be provided monthly to Sprint to measure whether Unbundled Network Elements are provided at least equal in quality and performance to that which SWBT provided to itself and other LSP's." In addition to this joint commitment, SWBT clearly commits to equal quality of service to Sprint in both Resale and Unbundled Network Element modes. Explicit performance measures committed to include:

  • Resale: "For all resale service ordered under this agreement, SWBT will provide preorder, order, and provisioning service equal in quality and speed (speed to be measured from time SWBT receives the order from Sprint) to the services SWBT provides its end users." (Attachment Ordering and Provisioning-Resale 1.2) SWBT further commits to equal response times and priorities on trouble reports as well as equal service from SWBT technicians.

  • Unbundled Network Elements: "Each Network Element provided by SWBT to Sprint will meet applicable regulatory performance standards and be at least equal in quality and performance as that which SWBT provides to itself." (Attachment UNE 2.17.1) SWBT further commits to equal response times and priorities on trouble reports, as well as equal answer times in the repair center.

    Liquidated damages for non-performance in:

  • In its agreement with Sprint for interconnection in the state of Kansas, SWBT commits to the above performance measures related to Unbundled Loop Provisioning, Interim Number Portability Provisioning, and Out-of-Service Repairs.

  • Additionally, SWBT commits to measure order intervals for the following unbundled network elements, although many apparently are negotiated on an individual case basis (ICB): Network Interface Device (NID), Local Loop, Local Switching without Customized Routing, Operator Service/Directory Assistance, Interoffice Transport, Signaling Link Transport, SCP Databases, and Local Switching with Customized Routing. If meaningful ICB intervals are agreed upon, these can be excellent measures of product-specific performance adequacy.

    C. SWBT COMMENTS IN RESPONSE TO SECTION 272 FURTHER NOTICE OF PROPOSED RULEMAKING

  • This affidavit deals primarily with SWBT's fulfillment of requirements under Section 251 of the Act, requiring BOCs to provide wholesale inputs to carriers competing in the local exchange market.

  • However, in December 1996, the FCC, in its First Report and Order and Further Notice of Proposed Rulemaking, adopted non-accounting safeguards pursuant to Section 272 of the Act. Section 272 governs the entry of BOCs into the interLATA telecommunications services, interLATA information services, and manufacturing markets. These safeguards included a requirement that the BOCs make publicly available the intervals within which they provide services to themselves and their affiliates. The Commission proposed a standardized report format that included seven service categories as an appropriate means of making the information available. While Section 272 requirements do not directly impact the requirements under Section 251, SWBT's comments on those requirements may have a bearing on its ability and willingness to provide similar performance measure results for interconnection to CLECs.

  • The seven service categories proposed by the Commission in the Section 272 order are as follows:

  • SWBT proposes to update results for these seven performance measures on a monthly basis and would provide its information on a corporate-wide basis. Key to determining market parity would be SWBT's willingness to provide these measures more frequently and on a geographic and class-of-service basis.

    D. PERFORMANCE MEASURES NOT INCLUDED IN SWBT'S APPLICATION

  • SWBT's assertion that it will perform wholesale functions for CLECs at least equal in quality to those performed for itself or its subsidiaries is a sound basis for meeting the requirements of the Act. However, the ability to test whether parity exists or whether discrimination is taking place depends on the existence of explicit and specific performance measures and the reporting of those results for both SWBT and new entrants.

  • This affidavit is not an attempt to prescribe a model set of performance measures. Nor does it attempt to lay out a minimum set of performance measures that would meet the requirements of the Act. Rather, it discusses typical performance measures for each of the wholesale functions BOCs must perform under the 1996 Act to provide resale services, unbundled network elements, and facilities-based interconnection. It also discusses examples of market and product parity measurements as well as administrative reporting mechanisms. The performance measure examples discussed below are not new. Most have been tracked and reported by BOCs internally, are reported to state or federal regulatory bodies, or have been proposed as parity measures by at least one BOC.

  • Pre-ordering: Pre-ordering performance measures revolve around the ability of a CLEC service representative to complete an order with an end user on the line with at least the speed and accuracy of a BOC service representative taking a similar order from a retail end user. Since CLEC service representatives will likely interface with BOC OSSs and with BOC service representatives, performance measures are needed to measure the cycle time and reliability of both interactions. These measurements will ensure that BOC service representatives do not have an unfair advantage in creating a superior end user perception of speed and efficiency. Typical pre-order performance measures not specifically proposed by SWBT in its Section 271 application include:

  • BOC OSS Response Time--Measure, in seconds, the speed with which CLEC service representatives receive the following information:

    Several such measures are proposed by Ameritech. These are important in creating a customer perception of equal calling time when placing an order with a CLEC.

  • Ordering: Ordering performance measures revolve around measuring the CLEC's ability to process end user service orders into the BOC and through the BOC OSSs with speed and accuracy at least equal to the BOC itself. Ordering cycle time is primarily measured by the promptness of communications between the BOC and the CLEC and by the success of order "flow-thru." Ordering reliability is measured by the accuracy of the service order. Typical ordering performance measures not specifically proposed by SWBT in its Section 271 application include:

  • As noted above, SWBT has agreed to this measure under the requirements of Section 272, calling for a renewed measure each time an order is subsequently submitted. I do not disagree with this requirement; however, an overall measure per service order would be worthwhile in meeting the spirit of the Section 251 requirements.

  • Provisioning: Provisioning performance measures measure how quickly and how accurately end user service orders are completed. Parity in performing provisioning functions results in CLEC customers receiving service with speed and quality at least equal to that received by BOC retail or subsidiary customers. Provisioning measures have a long and detailed history within the BOCs. They are used to review and compare manager performance, and they are also required by state and federal regulatory bodies. Provisioning is a process highly visible to end users and, therefore, is a key determinant of CLEC success in the marketplace. Typical provisioning performance measures not provided by SWBT in its Section 271 Application or any existing interconnection agreements include:

  • Maintenance: Maintenance performance measures depict two subprocesses:

    1. Trouble reporting and clearance, and

    2. Network quality.

  • Trouble reporting performance measures describe how quickly and how well end user trouble is cared for. Performance parity exists if a CLEC customer's trouble is cleared with at least the same speed and quality as a BOC retail or subsidiary customer's. This is a highly visible process to the end user and has a significant impact on the end user's perception of the service provider. Typical maintenance performance measures not provided by SWBT in its Section 271 Application or any existing interconnection agreements include:

  • Network Quality performance measures measure how well SWBT's network is maintained and whether SWBT's network performance discriminates against new entrants. Comparisons are between the performance distribution for SWBT retail or subsidiary customers and the performance distribution for CLEC customers. While it is not clear that this type of discrimination would be likely, network performance measures are critical to customer service and have historically been readily available. Typical network quality performance measures not provided by SWBT in its Section 271 Application or any existing interconnection agreements include:

  • Billing: Billing performance measures measure the timeliness, accuracy, and completeness of end user billing records and wholesale bills. These are measures of performance adequacy, and are important because, once provisioned, billing is the most frequent and visible contact an end user has with the provider. Typical billing performance measures not provided by SWBT in its Section 271 Application or any existing interconnection agreements include:

  • Toll and Directory Assistance: Toll and Directory Assistance performance measures measure the speed of response to CLEC customers by BOC operators. They are measures of performance parity. Typical Toll and Directory Assistance performance measures not provided by SWBT in its Section 271 Application or any existing interconnection agreements include:

  • Market Parity: Market parity ensures that agreed-upon performance measures present appropriate customer group comparisons between SWBT and CLECs. Customer groups generally fall into two categories: geographic and class of service. For example, if a CLEC offers service in only one city, appropriate performance measures would provide comparable SWBT retail results for that city only. Similarly, if a CLEC targets only small business customers, appropriate performance measures would provide comparative SWBT results for its small business customers only. SWBT does not explicitly discuss geographic or class of service market parity in its application.

  • Product Parity: SWBT, in its Application and negotiated interconnection agreements, does include both Resale and UNE performance measures, but it has not formally agreed to this breakout; Ameritech, by comparison, has proposed performance measures for both Resale and UNE in its Michigan SGAT. Product parity also requires that performance measures be identified, measured, and reported for the products or product families a CLEC offers to end users. Examples include POTS, Subrate data, HICAP data, Centrex, and ISDN. If a CLEC offers DS1 service to its end users as part of a UNE loop resale arrangement, SWBT would need to provide results both for the service provided to those customers and for its own DS1 customers. Ameritech has also proposed such product-based performance measures in its Michigan SGAT.

  • Reporting Requirements: SWBT makes no mention of performance measure data availability, which would allow CLECs access to SWBT's partitioned results databases and, in turn, allow a CLEC to pull reports itself. Further, SWBT does not explicitly specify the entities to be measured; examples include results for a particular CLEC, all CLECs, SWBT retail, and any appropriate SWBT affiliates. In its comments on service requirements under Section 272, SWBT argues against providing results for individual affiliates. Ameritech has proposed in its Michigan SGAT to provide results for each CLEC, all CLECs, and its own retail end users, but not for its own affiliates. SWBT has not specified or provided examples related to performance report frequency, accuracy, or format.




    V. CONCLUSIONS

  • SWBT's Section 271 application to provide in-region interLATA service in the state of Oklahoma includes a commitment to provide wholesale functions to new entrants at least equal in quality to that provided to its own retail end users. Further, the application proposes several specific performance measures that would allow, if properly disaggregated, a test of that commitment to parity. These proposed measures are nominally those reported to the FCC as part of ARMIS reporting requirements.

  • The application also refers to negotiated interconnection agreements as including other specific performance measures SWBT would be committed to for particular CLECs in particular markets. In Oklahoma, the specified measures are UNE loop provisioning and maintenance cycle time and Interim Number Portability provisioning cycle time. SWBT's agreement with Sprint is particularly robust with respect to performance measures. Finally, in its agreement with AT&T in Texas, several Resale performance measures are also specified.

  • SWBT also agrees with a number of performance measures proposed by the Commission under Section 272 of the Act. Five out of the seven proposed measures also pertain to requirements under Section 251, implying SWBT's support for these measures.

  • While few performance measures are explicitly proposed in SWBT's Section 271 Application, many are implicitly discussed and others are identified or discussed in interconnection agreements or regulatory proceedings. It follows that SWBT could make these additional performance measures an explicit part of their 271 application. However, some performance measures needed to determine parity in SWBT's provision of wholesale products are not identified in any document or proceeding. Examples include:

  • Additionally, SWBT has not discussed providing appropriate market parity reports. They have discussed performance measure report frequency and comparison entities in their Oklahoma interconnection agreement with Sprint, but have not provided explicit examples. Product parity is implied by SWBT's separate treatment of resale and unbundled network elements, but no commitment is made to a broader recognition of different CLEC offerings.

  • Although SWBT has clearly committed to adequate and parity performance, their application should include more explicit identification of performance measures, including sample reports, that would allow competitors and regulators to judge whether adequacy and parity have been achieved for all wholesale functions. As at least a rough guide to providing such explicit identification, SWBT's Oklahoma interconnection agreement with Sprint and Ameritech's Michigan SGAT and subsequent performance measure proposals, attached, represent a good beginning.