I added 100 000 rows to a table in my database (localhost) and since then I get this error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
I resolved the problem by typing this in the console:
javaw -XX:-UseConcMarkSweepGC
And the console output is (see code below for context):
2015-08-02T02:57:22.779+0200|Info: 5
2015-08-02T02:57:22.779+0200|Info: end, time taken: 82755
It takes 82 seconds to extract one row from the database (see code at the end). It was working fine when I had fewer rows, so I'm wondering:
- Why would it take so much time to extract 1 row? Surely JPA isn't loading every row into memory as objects? Or is it? Just wow.
- Is there a way around this? Extracting a single row in 80 seconds is unusably slow. (See the em.find sketch after my service code below.)
- Do I really have to pass the -XX:-UseConcMarkSweepGC option? What does it do? (A small snippet for checking which options the JVM actually runs with follows the doc quote.) From the doc:
Use concurrent mark-sweep collection for the old generation. (Introduced in 1.4.1)
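If it helps, here is a small snippet that could be dropped into the application to print which options the server JVM was actually started with. This is my own untested sketch using the standard java.lang.management API; the class name is just a placeholder:

import java.lang.management.ManagementFactory;

public class JvmFlagsCheck {
    // Prints the options the JVM hosting this code was started with,
    // e.g. -XX:-UseConcMarkSweepGC or any -Xmx setting.
    public static void printJvmOptions() {
        for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
            System.out.println(arg);
        }
    }
}

Calling JvmFlagsCheck.printJvmOptions() from the @Schedule method shown below would show whether the flag is actually in effect.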
Here is my code:
@EJB
private ThreadLookUpInterface ts;

// Timer fires every minute and looks up the thread with id 5
@Schedule(hour = "*", minute = "*/1", second = "0", persistent = false)
@Override
public void makeTopThreadList() {
    System.out.println("" + ts.getThread(5).getIdthread());
}
My service EJB looks like this:
@Stateless
public class ThreadLookUpService implements ThreadLookUpInterface {

    @PersistenceContext(unitName = "my-pu")
    private EntityManager em;

    private static final String FIND_THREAD_BY_ID =
            "SELECT t FROM Thethread t WHERE t.idthread=:id";

    @Override
    public Thethread getThread(int threadId) {
        // JPQL query that should match a single row by its id
        Query query = em.createQuery(FIND_THREAD_BY_ID);
        query.setParameter("id", threadId);
        try {
            Thethread thread = (Thethread) query.getSingleResult();
            return thread;
        } catch (NoResultException e) {
            return null;
        } catch (Exception e) {
            throw new DAOException(e);
        }
    }
}
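For comparison, a lookup by primary key with em.find would look like the sketch below. This is only a sketch: it assumes idthread is the @Id field of Thethread (not shown above), and getThreadById is just a name I picked, not a method on my current interface:

// Hypothetical alternative method in ThreadLookUpService
public Thethread getThreadById(int threadId) {
    // find() looks the entity up by its primary key (checking the
    // persistence context first) instead of running a JPQL query.
    return em.find(Thethread.class, threadId);
}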