I've been developing a spreadsheet system to track reported glitches and impossible coincidences across London, correlating them with location, time, and other variables. The idea is that if the simulation has processing limitations, certain areas might have higher glitch frequency during high-traffic times (like Piccadilly during rush hour).
I've got about 300 reported incidents collated from forums, Reddit threads, personal submissions, and some interviews I've done. Most incidents cluster around transport hubs - King's Cross, Victoria, London Bridge - but there's also a hotspot in Fitzrovia with no obvious explanation for its high coincidence rate.
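One thing worth doing before reading too much into the transport-hub clustering: normalize raw counts by how many people pass through each location, since busier places have more potential observers. Here's a minimal sketch of that normalization - every number in it (incident counts and footfall figures) is a made-up placeholder for illustration, not my actual data:

```python
# Sketch: convert raw incident counts into rates per million visits,
# so high-footfall hubs don't dominate just because more people are
# there to notice things. All figures below are invented placeholders.

incidents = {
    "King's Cross": 62,
    "Victoria": 48,
    "London Bridge": 41,
    "Fitzrovia": 27,
}

# Hypothetical annual footfall in millions of visits -- you'd substitute
# real entry/exit or pedestrian-count data if available.
footfall_millions = {
    "King's Cross": 90.0,
    "Victoria": 75.0,
    "London Bridge": 60.0,
    "Fitzrovia": 8.0,
}

rate_per_million = {
    place: incidents[place] / footfall_millions[place]
    for place in incidents
}

for place, rate in sorted(rate_per_million.items(), key=lambda kv: -kv[1]):
    print(f"{place:15s} {rate:6.2f} incidents per million visits")
```

With placeholder numbers like these, a low-traffic area such as Fitzrovia can come out with a far higher per-visit rate than the big stations, which is exactly the kind of thing raw counts hide.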
Before I present this data anywhere, I'm trying to figure out if my methodology is sound. Am I accounting for observer bias? (People are more likely to notice glitches in high-stress environments like busy stations.) Am I collecting data consistently? Has anyone here done similar analysis?
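On the "is this hotspot real or just noise" question, one standard check is to ask how surprising a location's count would be if reports were generated at a uniform baseline rate. A simple version models the count as Poisson and computes the tail probability. This is a sketch with invented numbers (the observed count and the expected baseline are placeholders, not my dataset):

```python
import math

def poisson_tail(k, mu):
    """P(X >= k) for X ~ Poisson(mu): the chance of seeing k or more
    incidents at a location if its true rate were just the baseline mu."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i)
                     for i in range(k))

# Hypothetical example: 27 reports observed in an area where the
# city-wide baseline (scaled by footfall) would predict about 12.
observed = 27
expected = 12.0
p = poisson_tail(observed, expected)
print(f"P(>= {observed} incidents | baseline {expected}) = {p:.2e}")
```

Two caveats if you use something like this: the baseline itself has to come from footfall-adjusted data (otherwise you're back to observer bias), and if you run this test over many locations you should expect a few small p-values by chance alone, so some multiple-comparisons correction is needed before calling anything a genuine hotspot.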