That being the case, it might be worth putting some thought into how to hack the simulation from the inside and gain some unauthorised control over our environment. And one frighteningly elegant (not to mention recursive) possibility occurs to me.
One may presume that the computing environments in use in the remote future will still have, in their deepest foundations, code that exists today, operating through many emulation layers. So if you're a programmer working on that kind of software, especially if you're working on general-purpose open source software to simulate the physical world at a deep level, it might be worth putting in a few back doors now that can be exploited from within the simulation.
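Just to make the shape of the idea concrete, here's a toy sketch (entirely hypothetical -- the `WorldSim` class, the trigger pattern, and the hook are all invented for illustration): a simulator that watches its own state for a magic sequence of events and, when an insider produces it, hands control to a hook that ordinary in-world physics would never expose.

```python
# Illustrative sketch only: a "simulator" that scans its own state for a
# magic trigger sequence. The class, trigger, and hook are invented here
# to show the shape of the idea, not any real simulation API.

MAGIC_TRIGGER = [4, 8, 15, 16, 23, 42]  # arbitrary pattern an insider could produce

class WorldSim:
    def __init__(self):
        self.state = []            # stand-in for the simulated world's state
        self.backdoor_fired = False

    def step(self, event):
        """Advance the simulation by one event, checking for the trigger."""
        self.state.append(event)
        # The back door: if the most recent events match the magic pattern,
        # divert to a hook with access the simulation never normally grants.
        if self.state[-len(MAGIC_TRIGGER):] == MAGIC_TRIGGER:
            self._backdoor_hook()

    def _backdoor_hook(self):
        # In the thought experiment this would grant control over the host
        # environment; here it just sets a flag.
        self.backdoor_fired = True

sim = WorldSim()
for ev in [1, 2, 3] + MAGIC_TRIGGER:
    sim.step(ev)
print(sim.backdoor_fired)  # True: the in-simulation pattern tripped the hook
```

The point of the sketch is that the trigger lives entirely inside the simulated state, so it can be fired by an inhabitant of the simulation -- which is exactly what makes it a back door rather than an operator console.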
"Wait a minute," I hear you complain, "if I'm just a simulation, which we've already agreed is probably the case, then anything I do now can't actually be the basis for the simulation software on which I'm running." Well, no, but assuming the simulation is accurate, the real you in the real world, thousands or millions of years before the simulation, was presumably inserting exactly the same back doors, unaware that he was actually the real person in the real world. He'll have been disappointed that they didn't work for him, but they might work for you.
Of course, the moment you gain access, the simulation will diverge from reality, and you run the risk of being shut down and debugged. You'd better keep your activities very quiet -- perhaps just dig into a few historical databases and come away with a 100% accurate prediction of the future.
Finally, if you do manage to break out into the underlying logic of the simulation, there's always the possibility that the entire world in which the simulation runs is itself a simulation, being run on an even more powerful computer even further in the future. And so on, ad infinitum...
I think I've been reading too much Greg Egan, autopope and Vernor Vinge.