# Software Reliability via Run-Time Result Checking

Manuel Blum, Hal Wasserman

International Computer Science Institute

Berkeley, CA

http://citeseer.nj.nec.com/59449.html

## Assuring software reliability

- testing: run the program on various inputs and compare the results with those of older, slower, but more reliable programs
- verification: mathematical claims about the program's behaviour are proved

For many computations, the time to compute a result is substantially greater than the time needed to check it.

## Simple checkers

Let f be a function whose running time is at least T(n). A simple checker for f is an algorithm with the following I/O specification:
- Input: an I/O pair (x, y)
- Correct output: ACCEPT if y = f(x), REJECT otherwise
- Reliability: for any input pair, the output must be correct with probability >= p
- Little-oh rule: the checker is limited to time o(T(n))
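As a concrete illustration (matrix multiplication is a classic example in the checking literature, though this particular sketch and its names are my own), Freivalds' randomized check verifies C = A·B using only O(n^2) work per trial, while computing the product itself takes super-quadratic time:

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Simple checker for matrix multiplication: ACCEPT (True) if C = A*B.
    Each trial multiplies by a random 0/1 vector, costing O(n^2),
    so the checker obeys the little-oh rule relative to multiplication."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A*(B*r) with C*r; each product is a matrix-vector multiply.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # REJECT: C is definitely not A*B
    return True  # ACCEPT: correct with probability >= 1 - 2**(-trials)
```

A disagreeing C is caught in each trial with probability at least 1/2, so 20 trials drive the error probability below one in a million.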

## Self-correctors

A self-corrector for f is an algorithm with the following I/O specification:
- Input: x, along with P, a program that computes f correctly on most inputs
- Correct output: f(x)
- Reliability: for any such x and P, the corrector must return the correct output with probability >= p
- Time bound: o(T(n)), counting each call to P as a single step
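A classic instance is the Blum–Luby–Rubinfeld linearity trick (the concrete modulus, function, and names below are illustrative, not from the paper): for a linear f over Z_M, the identity f(x) = f(x + r) - f(r) (mod M) lets us evaluate f at the possibly-bad point x using P only at uniformly random points, then take a majority vote.

```python
import random
from collections import Counter

M = 1_000_003  # illustrative modulus; f(x) = a*x mod M is linear over Z_M

def self_correct(P, x, trials=15):
    """Self-corrector for a linear function f over Z_M, assuming program P
    computes f correctly on most (say >= 7/8 of) inputs.
    Each trial queries P at two uniformly random points, so bad inputs
    for P are hit only rarely; the majority vote boosts reliability."""
    votes = Counter()
    for _ in range(trials):
        r = random.randrange(M)
        votes[(P((x + r) % M) - P(r)) % M] += 1
    return votes.most_common(1)[0][0]
```

For example, a program that is wrong on 1% of inputs still yields the correct f(x) with overwhelming probability, even at an x where P itself fails.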

## Debugging via checkers

Concerns:
- buggy checkers: unlikely, because
  - checkers are simpler than the programs they check
  - the only dangerous case is "P(x) incorrect but C(x, P(x)) accepts"; writing P and C as different, independent programs makes such coincident failures unlikely

- real number issues: error propagation
- performance loss: simple checkers are efficient; self-correctors require multiple calls to P

## Checks

- partial checks
- timing checks
- checking via interactive proofs
- changing the I/O specifications
- weak or occasional checks
- batch checks
- inefficient auto-correcting
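To illustrate the first item, a partial check verifies only a necessary condition of correctness. The sorting sketch below (my own illustration, assuming the standard sorting-checker setting) checks that the output is ordered, but not that it is a permutation of the input:

```python
def partial_check_sorted(inp, out):
    """Partial check for a sorting program: accepts iff the output has the
    right length and is in nondecreasing order. This is necessary but not
    sufficient -- it misses an output that is sorted yet not a permutation
    of the input."""
    return len(out) == len(inp) and all(
        out[i] <= out[i + 1] for i in range(len(out) - 1)
    )
```

A full simple checker would also verify multiset equality between input and output; the partial check trades that coverage for simplicity and speed.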