| | Re: Reliability issues
Weighted kappa is the appropriate analysis for ordinal data. One note on terminology: "intra-rater reliability" is not a separate analysis — it is essentially test-retest reliability applied to a single rater. The two types of reliability usually relevant to an experimental study are these. First, inter-rater reliability asks whether you and another rater, scoring the same subjects at the same time, produce consistent results. Second, test-retest reliability asks whether the same subjects, measured at two different times, yield consistent results. It sounds like you are asking about inter-rater reliability, in which case you need to recruit a second rater to score the same sample independently, and then compare the two sets of ratings.
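If it helps, here is a minimal sketch of how quadratic-weighted kappa can be computed for two raters' ordinal scores. This is a hand-rolled illustration (the function name and the example ratings are mine, not from any particular package); in practice you could also use `sklearn.metrics.cohen_kappa_score(r1, r2, weights="quadratic")`.

```python
def quadratic_weighted_kappa(r1, r2, categories):
    """Weighted kappa for two raters' ordinal ratings, quadratic weights.

    r1, r2     -- equal-length lists of ratings from rater 1 and rater 2
    categories -- the ordered list of possible rating levels
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)

    # Observed k x k agreement matrix: obs[i][j] counts subjects that
    # rater 1 scored as category i and rater 2 scored as category j.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1

    # Marginal totals for each rater (row = rater 1, col = rater 2).
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2  # quadratic disagreement weight
            expected = row[i] * col[j] / n   # count expected by chance
            num += w * obs[i][j]
            den += w * expected

    # 1 minus (weighted observed disagreement / weighted chance disagreement)
    return 1.0 - num / den


# Hypothetical example: two raters scoring 8 subjects on a 3-point scale.
rater1 = [1, 2, 3, 2, 1, 3, 2, 1]
rater2 = [1, 2, 3, 3, 1, 2, 2, 1]
kappa = quadratic_weighted_kappa(rater1, rater2, [1, 2, 3])
print(round(kappa, 3))
```

Quadratic weights penalize large ordinal disagreements (e.g. 1 vs 3) more heavily than adjacent ones (1 vs 2), which is why weighted kappa is preferred over plain Cohen's kappa for ordinal scales.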