Caveat: this is a little esoteric for the forum, so stop reading now unless you're interested in probability concepts. I'm a professor who teaches statistics, and Blinkadyblink is on the right track. The reason people think the dice aren't random comes from a fundamental misunderstanding of what the term means in statistical probability. In everyday language we use random to mean options that are different each time but fairly evenly distributed (one day I wear a brown shirt, then a blue shirt, but never the same shirt two days in a row).
But in statistical probability, random just means that you cannot predict the next number in a series from knowledge of the previous ones. We call this a random walk: like a drunk walking down the street, you cannot predict where they will step next (or whether they'll throw up on their shoes).
In a random walk you get all kinds of outcomes that look like patterns, for example, five 6s in a row or seven 1s in a row. The classic classroom experiment is flipping a coin and recording heads and tails; you frequently end up with five, six, or more heads or tails in a row (for a demonstration see http://shazam.econ.ubc.ca/flip/).
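You can run the same classroom experiment yourself. Here's a quick sketch in Python (my own illustration, not something from the course): flip a fair coin a couple hundred times and find the longest streak of identical results. Streaks of five or more show up routinely.

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# Simulate 200 fair coin flips.
flips = [random.choice("HT") for _ in range(200)]
print("".join(flips))
print("longest streak:", longest_run(flips))
```

Run it a few times; you'll almost never see a longest streak shorter than five, even though every individual flip is 50/50.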
A long run of the same outcome always surprises people. "How is that random? I thought it was a 50/50 chance?" Because the definition of random in probability only means you cannot predict the outcome, not that patterns won't emerge, and they always do. The larger the sample, the more closely the overall average will match the actual odds (1/6 for each face of a die), but small samples can contain any combination of patterns (five 6s or seven 1s).
In fact, 14 losses in a row fit perfectly well within a random walk. Even with that knowledge, I cannot predict your next series of outcomes; maybe you'll get 3 wins in a row. But we remember bad outcomes far more readily than good ones.