We might suppose that risk managers and approvers favour companies with better ratings.

Let us check the effect of this. First, we define a sampling weight for each company based on its rating bucket.

```
weights <- 4 - 1.2*log(CRRbuckets) # higher weight for better (lower-numbered) buckets, so they are more likely to be sampled
plot(CRRbuckets,weights)
```

The rating distribution for companies sampled with these weights is shown below in red; the blue line is the original population distribution, scaled down to the sample size.

```
BetterCRRSamp <- sample(1:nPop, nSamp, prob = weights)
# compare the distributions: count companies per bucket in the sample...
countByCRR <- sapply(Bucket, function(x) sum(CRRbuckets[BetterCRRSamp] == x))
plot(Bucket, countByCRR, col = "red", type = "l")
# ...and in the population, scaled by the sampling fraction (nPop = 10*nSamp)
countByCRR <- sapply(Bucket, function(x) sum(CRRbuckets == x))
lines(Bucket, countByCRR/10, col = "blue")
```

```
# Average notch difference
mean(CRRbuckets) - mean(CRRbuckets[BetterCRRSamp])
```

`## [1] 0.329`

Now we calculate the average actual PD of the sample and compare it to the rating-implied PD.

`mean(popPD[BetterCRRSamp]) # mean actual PD`

`## [1] 0.02153543`

`mean(CRRmeanPD[CRRbuckets[BetterCRRSamp]]) # mean PD according to CRR scale`

`## [1] 0.02149525`

Both PDs are lower than the population average, as expected. Note, however, that the actual PD is still close to the rating-implied PD!
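This is no accident: the weights depend only on the bucket, so sampling is unbiased *within* each bucket, and the sample's actual PD tracks its rating-implied PD even though the bucket mix is shifted. A minimal self-contained sketch illustrates this; all parameters here (the 10-bucket scale, the bucket-level PDs, the ±30% within-bucket scatter) are hypothetical stand-ins, not the values used above.

```
# Self-contained sketch with hypothetical parameters: bucket-only sampling
# weights shift the bucket mix but not the within-bucket PD distribution.
set.seed(1)
nPop  <- 10000
nSamp <- 1000
buckets  <- sample(1:10, nPop, replace = TRUE)   # assumed 10-notch rating scale
bucketPD <- 0.002 * 1.5^(0:9)                    # assumed rating-implied PD per bucket
pd <- bucketPD[buckets] * runif(nPop, 0.7, 1.3)  # actual PDs scatter around the bucket PD
w  <- 4 - 1.2 * log(buckets)                     # same weighting scheme as above
samp <- sample(1:nPop, nSamp, prob = w)
mean(pd[samp])                 # mean actual PD of the biased sample
mean(bucketPD[buckets[samp]])  # mean rating-implied PD of the same sample -- very close
```

Both means come out well below the population mean PD, yet they agree with each other, mirroring the result above.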