If your Java list has more clones than a sci-fi convention, then yes, you need to find duplicates. This guide walks through reliable ways to detect repeated elements in a List while being annoyingly honest about the trade-offs in memory and speed. We cover HashMap counting, the two-Set trick, and a tidy Streams version, so you can pick your level of elegance or paranoia.
Why not just eyeball it? Because eyeballing is linear in confidence and quadratic in regret. Naive nested loops are O(n²) and fine for tiny lists or dramatic code golf; a sketch of that baseline follows. For anything nontrivial, use a linear-time method that scales like a responsible adult.
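For reference, here is the quadratic baseline as a minimal sketch, so you know exactly what you are graduating from:

import java.util.*;

// O(n^2): compares every pair; tolerable only for very small lists
static <T> Set<T> findDuplicatesNaive(List<T> list) {
    Set<T> dupes = new HashSet<>();
    for (int i = 0; i < list.size(); i++) {
        for (int j = i + 1; j < list.size(); j++) {
            if (Objects.equals(list.get(i), list.get(j))) {
                dupes.add(list.get(i)); // repeat found
            }
        }
    }
    return dupes;
}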
Use a HashMap to count how many times each element appears. You get full frequency information, which is nice when you care about counts, not just presence.
import java.util.*;
import java.util.stream.Collectors;

// Count every element, then keep the ones that appear more than once
static <T> List<T> findDuplicates(List<T> list) {
    Map<T, Integer> counts = new HashMap<>();
    for (T item : list) {
        counts.put(item, counts.getOrDefault(item, 0) + 1);
    }
    return counts.entrySet().stream()
            .filter(e -> e.getValue() > 1)
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
}
Performance notes: time is O(n) and memory grows with the number of unique elements. Use this when counts matter, or when duplicates may appear many times and you need the exact number.
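A quick usage sketch (the word list is just an illustration):

List<String> words = List.of("a", "b", "a", "c", "b", "a");
System.out.println(findDuplicates(words)); // e.g. [a, b]; HashMap iteration order is not guaranteed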
This is compact and commonly the best first attempt. One Set records the values you have already seen; if Set.add returns false, the value is a duplicate, and you stash it in a second Set.
import java.util.*;

// Set.add returns false when the element is already present,
// which is exactly the duplicate signal we need
static <T> Set<T> findDuplicateValues(List<T> list) {
    Set<T> seen = new HashSet<>();
    Set<T> dupes = new HashSet<>();
    for (T item : list) {
        if (!seen.add(item)) {
            dupes.add(item); // second or later sighting
        }
    }
    return dupes; // unique values that appeared more than once
}
Memory-wise this often carries slightly less overhead than a full map when you only care about which values repeat. It also naturally yields unique duplicates without extra cleanup, and swapping HashSet for LinkedHashSet reports them in encounter order.
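A usage sketch, assuming the method above is in scope:

List<String> tags = List.of("red", "blue", "red", "green", "red");
System.out.println(findDuplicateValues(tags)); // [red], one entry no matter how many repeats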
If you prefer a fluent style and shorter code, the Streams API groups and filters nicely. Expect a bit of runtime overhead compared to tight loops, but enjoy the readable pipeline.
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

static <T> List<T> findDuplicatesStream(List<T> list) {
    // groupingBy + counting builds a Map<T, Long> of frequencies in one pass
    return list.stream()
            .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
            .entrySet().stream()
            .filter(e -> e.getValue() > 1)
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
}
This gives the same information as the map approach but in fewer lines. Use it when maintainability matters more than raw micro-performance. On Java 16 or newer you can also shorten the tail of the pipeline, as shown below.
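The same method with Stream.toList(), which replaces collect(Collectors.toList()) from Java 16 onward:

static <T> List<T> findDuplicatesStream(List<T> list) {
    return list.stream()
            .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
            .entrySet().stream()
            .filter(e -> e.getValue() > 1)
            .map(Map.Entry::getKey)
            .toList(); // Java 16+; returns an unmodifiable List
}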
Nulls are valid list elements in Java. HashMap and HashSet both accept null, so the first two approaches will happily track it; the Streams version, however, throws a NullPointerException because Collectors.groupingBy rejects null keys. Decide whether null counts as a duplicate in your business logic. For mutable list elements, use immutable keys or a stable key extractor (sketched below) to avoid surprise behavior when objects change hash codes mid-flight.
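A sketch of the key-extractor idea, assuming a hypothetical mutable Person class with a stable, immutable getId():

import java.util.*;
import java.util.stream.Collectors;

// Group by a stable key instead of the mutable object itself
// (Person and getId() are placeholders for your own types)
static List<Long> duplicatePersonIds(List<Person> people) {
    return people.stream()
            .collect(Collectors.groupingBy(Person::getId, Collectors.counting()))
            .entrySet().stream()
            .filter(e -> e.getValue() > 1)
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
}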
Pick the Map when you need counts, the two Sets when you just want to know which values repeat, and Streams when readability wins and you do not need extreme micro-performance. If you are still unsure, run a quick benchmark on realistic input and let the numbers do the yelling for you.
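A rough timing sketch to get you started; buildRealisticInput() is a hypothetical stand-in for your own test data, and for numbers you would actually trust, reach for JMH:

// Quick-and-dirty timing; JMH handles warmup and JIT effects properly
List<String> input = buildRealisticInput(); // hypothetical data builder
long start = System.nanoTime();
List<String> dupes = findDuplicates(input);
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
System.out.println("found " + dupes.size() + " duplicates in " + elapsedMs + " ms");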
Now go find those duplicates and pretend you enjoyed it.