Conventional magnetic recording media are composed of tiny fundamental magnetizable units, called "grains", whose sizes and shapes are random. Data are stored on a medium as bits written into evenly spaced bit cells; writing a bit uniformly polarizes all the grains within its cell. In the push towards terabit-density magnetic recording, bit cells shrink until their size becomes commensurate with that of an individual grain. At that point, the lack of precise knowledge of grain boundaries becomes a bottleneck: it manifests as an error mechanism in which bits of data are overwritten by their neighbours on the medium. We consider a one-dimensional version of this error model, and derive several properties of codes that correct this type of error.
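The one-dimensional model can be illustrated with a short simulation. This is a hedged sketch, not the paper's formal definition: the function name `grain_channel`, the assumption that each grain spans at most two adjacent bit cells, and the convention that a grain takes the polarity of its first (leftmost) cell are all illustrative choices.

```python
def grain_channel(x, grain_starts):
    """Sketch of a 1-D grain error channel (illustrative assumptions):
    each grain in `grain_starts` covers the two adjacent cells (i, i+1),
    and the whole grain takes the polarity of cell i, so the bit in
    cell i+1 is overwritten by its left neighbour."""
    y = list(x)
    for i in grain_starts:
        # Grain spans cells i and i+1: the neighbour's value wins.
        y[i + 1] = y[i]
    return y

# Example: a grain starting at position 1 overwrites the bit in cell 2.
print(grain_channel([0, 1, 0, 1], [1]))  # [0, 1, 1, 1]
```

Under this convention, a grain causes an error only when the two bits it covers differ, which is why codes for this channel need to control patterns of adjacent unequal bits rather than arbitrary substitutions.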