Camera calibration methods, whether implicit or explicit, are a critical part of most 3D vision systems. These methods estimate a model of the camera that produced the visual input, which is then used to infer the 3D structure that gave rise to that input. However, in these systems the calibration error is typically unknown, or, if known, its effect on subsequent processing (e.g., 3D reconstruction) is not accounted for. In this paper, we propose a Bayesian camera calibration method that explicitly computes calibration error, and we show how knowledge of this error can be used to improve the accuracy of subsequent processing. What distinguishes this work is the explicit computation of a posterior distribution over the unknown camera parameters, rather than just a single best estimate. Marginalizing (averaging) subsequent estimates over this posterior is shown to reduce reconstruction error relative to calibration approaches that rely on a single best estimate. The method is made practical using sampling techniques, which require only the evaluation of the calibration error function and the specification of priors. Samples, together with their corresponding probability weights, can be used to produce better estimates of the camera parameters. Moreover, these samples can be used directly to improve estimates that rely on calibration information, such as 3D reconstruction. We evaluate our method on simulated data for a structure-from-motion problem, in which the same point matches are used to calibrate the camera, estimate the motion, and reconstruct the 3D geometry. Our results show improved reconstruction over non-linear camera calibration methods such as maximum-likelihood estimation. In addition, our approach degrades far more gracefully as the point matches become increasingly noisy.
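The marginalization idea described above can be sketched with a toy importance-sampling example. Everything here is a simplified stand-in, not the paper's actual formulation: `calibration_error` is a hypothetical quadratic surrogate for the real calibration error function, and `reconstruct_depth` is a one-parameter stand-in for the full 3D reconstruction step. The sketch contrasts a plug-in estimate from a single best camera parameter with an estimate averaged over the sampled posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup (illustrative stand-ins, not the paper's model) ---
true_f = 500.0                      # "true" focal length, in pixels
sigma = 20.0                        # assumed noise scale of the error model

def calibration_error(f):
    # Hypothetical calibration error: how badly a candidate focal
    # length f explains the observed point matches. A simple quadratic
    # surrogate so the example is self-contained.
    return (f - true_f) ** 2 / (2 * sigma ** 2)

def reconstruct_depth(f):
    # Hypothetical calibration-dependent reconstruction: depth from a
    # disparity of d pixels at baseline b, z = f * b / d.
    b, d = 0.1, 2.0
    return f * b / d

# --- Sample camera parameters from the prior, weight by likelihood ---
samples = rng.uniform(400.0, 600.0, size=5000)   # flat prior on f
log_w = -calibration_error(samples)              # log-likelihood weights
w = np.exp(log_w - log_w.max())                  # stabilize, then normalize
w /= w.sum()

# Single best estimate (highest-weight sample) vs. marginalized estimate
f_best = samples[np.argmax(w)]
depth_plugin = reconstruct_depth(f_best)             # plug-in estimate
depth_marginal = np.sum(w * reconstruct_depth(samples))  # posterior average
```

In this linear toy problem the two estimates nearly coincide; the paper's point is that for the non-linear error surfaces of real calibration, averaging reconstructions over the weighted samples accounts for calibration uncertainty that a single best estimate discards.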