Learning structured predictions in semantic segmentation has received increasing attention in recent years. Most semantic segmentation methods focus on common object datasets such as VOC and COCO, which label only the visible parts of each object, e.g., sections of a horse separated by objects in front of it. Domain-specific objects, on the other hand, often require whole-object segmentation despite image occlusion, e.g., roads and buildings in satellite imagery under vegetation cover, cells and organs in noisy medical images, and lanes and signs in autonomous driving applications. The widely used cross entropy loss does not work well in these cases, because its pixel-level independence assumption ignores topology and often leads to structural issues such as fragments and broken boundaries. To tackle this, we propose a simple but novel loss term that produces much more continuous and smooth predictions for whole-object segmentation. Experiments on various tasks show that other structured approaches often perform worse than the baseline for whole-object segmentation, whereas our loss yields significant topological improvements while preserving pixel-level metrics.
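The pixel-level independence assumption mentioned above can be illustrated with a minimal sketch (not the paper's method; the toy "road" example and helper function are hypothetical): pixel-wise cross entropy is invariant to where an error sits spatially, so a prediction whose single mistaken pixel breaks an object into two fragments receives the same loss as one whose mistake leaves the object connected.

```python
import numpy as np

def pixelwise_ce(pred, target, eps=1e-7):
    """Mean binary cross entropy over pixels, each treated independently."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(pred) + (1 - target) * np.log(1 - pred))))

# Ground truth: a continuous horizontal "road" across a 5x5 image.
target = np.zeros((5, 5))
target[2, :] = 1.0

# Prediction A: the same road, but broken in the middle (topological error:
# the gap splits the road into two disconnected fragments).
pred_a = target.copy()
pred_a[2, 2] = 0.1
pred_a[pred_a == 1.0] = 0.9

# Prediction B: the identical per-pixel error, but at the road's end,
# so the predicted road remains a single connected component.
pred_b = target.copy()
pred_b[2, 4] = 0.1
pred_b[pred_b == 1.0] = 0.9

# Both predictions receive the same loss despite very different topology.
print(pixelwise_ce(pred_a, target), pixelwise_ce(pred_b, target))
```

Because the per-pixel error values are identical multisets in both cases, the mean cross entropy cannot distinguish them; a topology-aware loss term is needed to penalize the fragmented prediction more heavily.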