Automated testing enables development teams to maintain the quality of information systems as they evolve. Nevertheless, the dual role played by database manipulation code means that it often suffers from software maintenance problems. In this paper, we study the current state of the practice in testing database manipulation code through two empirical investigations. As a motivational study, we first analyse 72 real-life Java projects mined from Libraries.io to get an impression of the test coverage of database code. By analysing the coverage achieved by their test code, we show that database manipulation classes are poorly tested: 46% of the projects left more than half of their database access methods uncovered by tests, and 33% of the projects did not cover their database code at all. To further understand the difficulties of testing database code, we then qualitatively analyse 532 questions posted on popular discussion forums (StackOverflow, CodeReview and SoftwareEngineering) and derive a taxonomy from them. We find that information system developers mostly look for insights into general best practices for testing database access code. They also ask more technical questions related to database handling, mocking, parallelisation, and framework or tool usage. Our exploratory investigations lay the groundwork for future research on automatically testing database manipulation code in information systems.