Having multiple M.2 slots means you can have two drives attached at the same time, and each can deliver full x4 performance on its own--just not both at once. So bandwidth only becomes a limit if you try to use both drives simultaneously, such as in RAID0. The chipset's ability to act as a switch is a big advantage: you can conveniently have many devices attached at the same time, and why not? Intel includes that capability with the chipset, so board makers have already paid for it and may as well load up their boards with more slots.
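To put some ballpark numbers on that sharing (these are my own approximations, not from any board's spec sheet), here's a minimal sketch assuming PCIe 3.0 lanes at roughly 0.985 GB/s of usable bandwidth each and a chipset uplink roughly equivalent to PCIe 3.0 x4:

```python
# Rough sketch (my own ballpark numbers): how a shared chipset uplink caps two
# x4 NVMe drives used at once, e.g. in RAID0.
# Assumes PCIe 3.0 lanes at about 0.985 GB/s usable each, and a chipset uplink
# (DMI 3.0) roughly equivalent to a PCIe 3.0 x4 link.

PCIE3_GBPS_PER_LANE = 0.985   # approximate usable GB/s per PCIe 3.0 lane
UPLINK_LANES = 4              # DMI 3.0 uplink is roughly a PCIe 3.0 x4 link

def per_drive_throughput(active_drives: int, lanes_per_drive: int = 4) -> float:
    """Effective GB/s each drive can get while sharing the chipset uplink."""
    drive_cap = lanes_per_drive * PCIE3_GBPS_PER_LANE    # limit of the drive's own slot
    uplink_cap = UPLINK_LANES * PCIE3_GBPS_PER_LANE      # limit of the shared uplink
    return min(drive_cap, uplink_cap / active_drives)

print(f"{per_drive_throughput(1):.2f} GB/s")  # ~3.94: one drive gets the full x4
print(f"{per_drive_throughput(2):.2f} GB/s")  # ~1.97: RAID0 splits the same x4 uplink
```

One drive by itself gets the full ~3.9 GB/s, but as soon as both are hammered at once--RAID0 being the obvious case--each one is capped at roughly half that.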
I should point out that some high-end boards use a separate, third-party switch chip from the likes of IDT, PLX or Microsemi to split that x4 connection into even more lanes, giving you the flexibility of even more simultaneously connected devices. This makes PCIe behave more like the shared-bus architecture that PCI was (instead of point-to-point), and just as with PCI, or ethernet hubs versus switches, the upstream bandwidth must be shared.
That's better than things used to be--for example, with Z97 boards and their secondary x16 slots (which were electrically x4), Gigabyte wired things so that installing any card in that slot disabled all the x1 slots. You could use either that x16 slot or the x1 slots, but not both. ASUS, on the other hand, wired that second x16 slot as x2 so that both of their x1 slots would keep working, but of course that meant the card in the x16 slot had its usual bandwidth cut in half even if you weren't using the x1 slots.
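If it helps to see that trade-off as numbers (my own rough figures, assuming the Z97 chipset's PCIe 2.0 lanes at about 0.5 GB/s of usable bandwidth each), a quick sketch:

```python
# Rough comparison (my own framing, not from either vendor's docs) of the two
# Z97-era wiring choices for the second "x16" slot, assuming the chipset's
# PCIe 2.0 lanes provide roughly 0.5 GB/s of usable bandwidth each.

PCIE2_GBPS_PER_LANE = 0.5  # approximate usable GB/s per PCIe 2.0 lane

# (electrical lanes to the second x16 slot, whether the x1 slots stay usable)
wirings = {
    "Gigabyte-style": (4, False),  # full x4 to the slot, x1 slots disabled
    "ASUS-style":     (2, True),   # only x2 to the slot, x1 slots keep working
}

for name, (lanes, x1_slots_usable) in wirings.items():
    bandwidth = lanes * PCIE2_GBPS_PER_LANE
    print(f"{name}: ~{bandwidth:.1f} GB/s to the card, "
          f"x1 slots usable: {x1_slots_usable}")
```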