Yeah, it's complicated, but good complicated. Not the kind of buzzwords that only sound complicated to impress you, like typical shitcoins try to pull nowadays. Hate to repost, but below is a chart of what's going on in ETH development. It's a bit outdated since ETH PoS is about to Merge soon (ETA ~12 more hours).
And please don't hate on my boi Vitalik. Dude is like a coder on steroids (or drugs lol). Here's an interesting piece from Vitalik's site that talks about sharding and how to harden it against 51% attacks. Warning: it's a long read. I snipped a few paragraphs to encourage anyone to check out the full post at the link below.
Sharding through random sampling

The easiest version of sharding to understand is sharding through random sampling. Sharding through random sampling has weaker trust properties than the forms of sharding that we are building towards in the Ethereum ecosystem, but it uses simpler technology.
The core idea is as follows. Suppose that you have a proof of stake chain with a large number (eg. 10000) of validators, and you have a large number (eg. 100) of blocks that need to be verified. No single computer is powerful enough to validate all of these blocks before the next set of blocks comes in.
Hence, what we do is we randomly split up the work of doing the verification. We randomly shuffle the validator list, and we assign the first 100 validators in the shuffled list to verify the first block, the second 100 validators in the shuffled list to verify the second block, etc. A randomly selected group of validators that gets assigned to verify a block (or perform some other task) is called a committee.
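To make that concrete, here's a minimal Python sketch of the shuffle-and-slice idea using the example numbers above. It's illustrative only: the real beacon chain derives its randomness on-chain (RANDAO) and uses a verifiable swap-or-not shuffle, not random.shuffle.

```python
import random

# Example figures from the paragraph above (hypothetical, not protocol constants).
NUM_VALIDATORS = 10_000
NUM_BLOCKS = 100
COMMITTEE_SIZE = NUM_VALIDATORS // NUM_BLOCKS  # 100 validators per committee

def assign_committees(validators, num_blocks, seed):
    """Shuffle the validator list, then slice it into one committee per block."""
    rng = random.Random(seed)  # in reality the seed comes from an on-chain randomness beacon
    shuffled = list(validators)
    rng.shuffle(shuffled)
    return [shuffled[i * COMMITTEE_SIZE:(i + 1) * COMMITTEE_SIZE]
            for i in range(num_blocks)]

committees = assign_committees(range(NUM_VALIDATORS), NUM_BLOCKS, seed=42)
assert len(committees) == NUM_BLOCKS
assert all(len(c) == COMMITTEE_SIZE for c in committees)
```

The whole point is that the assignment is unpredictable ahead of time, so an attacker with a minority of the stake can't steer all of its validators into the same committee.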

When a validator verifies a block, they publish a signature attesting to the fact that they did so. Everyone else, instead of verifying 100 entire blocks, now only verifies 10000 signatures - a much smaller amount of work, especially with BLS signature aggregation. Instead of every block being broadcasted through the same P2P network, each block is broadcasted on a different sub-network, and nodes need only join the subnets corresponding to the blocks that they are responsible for (or are interested in for other reasons).
Consider what happens if each node's computing power increases by 2x. Because each node can now safely validate 2x more signatures, you could cut the minimum staking deposit size to support 2x more validators, and so you can make 200 committees instead of 100. Hence, you can verify 200 blocks per slot instead of 100. Furthermore, each individual block could be 2x bigger. Hence, you have 2x more blocks of 2x the size, or 4x more chain capacity altogether.
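A quick back-of-the-envelope sketch of why that works out to 4x. The assumption (taken straight from the example above) is that both the number of committees and the size of each block scale linearly with per-node compute, so total capacity scales with its square.

```python
def chain_capacity(node_compute_factor, base_blocks=100, base_block_size_mb=1.0):
    """Total data processed per slot, assuming both the number of blocks and the
    size of each block grow linearly with per-node computing power."""
    blocks_per_slot = base_blocks * node_compute_factor
    block_size_mb = base_block_size_mb * node_compute_factor
    return blocks_per_slot * block_size_mb

print(chain_capacity(1))  # 100.0 MB per slot (baseline)
print(chain_capacity(2))  # 400.0 MB per slot: 2x node power -> 4x chain capacity
```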
...
Recap: how are we ensuring everything is correct again?

Suppose that you have 100 blocks and you want to efficiently verify correctness for all of them without relying on committees. We need to do the following:
- Each client performs data availability sampling on each block, verifying that the data in each block is available, while downloading only a few kilobytes per block even if the block as a whole is a megabyte or larger in size. A client only accepts a block once all of its availability challenges have been correctly responded to (a toy sketch of this step follows after the list).
- Now that we have verified data availability, it becomes easier to verify correctness. There are two techniques:
  - We can use fraud proofs: a few participants with staked deposits can sign off on each block's correctness. Other nodes, called challengers (or fishermen), randomly check and attempt to fully process blocks. Because we already checked data availability, it will always be possible to download the data and fully process any particular block. If they find an invalid block, they post a challenge that everyone verifies. If the block turns out to be bad, then that block and all future blocks that depend on it need to be re-computed (see the second sketch after the list).
  - We can use ZK-SNARKs. Each block would come with a ZK-SNARK proving its correctness.
- In either of the above cases, each client only needs to do a small amount of verification work per block, no matter how big the block is. In the case of fraud proofs, occasionally blocks will need to be fully verified on-chain, but this should be extremely rare because triggering even one challenge is very expensive.
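Here's a toy Python sketch of the data availability sampling step from the first bullet. It's hugely simplified and the numbers are made up: real DAS erasure-codes the block and attaches proofs (e.g. Merkle or KZG) to each sample, so a small number of successful random queries makes withholding any meaningful part of the block statistically detectable. This only models the "ask for a few random chunks" part.

```python
import random

CHUNK_SIZE_BYTES = 512    # hypothetical sample size
SAMPLES_PER_BLOCK = 30    # hypothetical number of availability challenges

def block_is_available(request_chunk, num_chunks, rng=random):
    """Accept the block only if every randomly challenged chunk is actually served."""
    challenged = rng.sample(range(num_chunks), SAMPLES_PER_BLOCK)
    return all(request_chunk(i) is not None for i in challenged)

# Mock "network": a publisher that withholds some chunks returns None for them.
def make_publisher(chunks, withheld):
    return lambda i: None if i in withheld else chunks[i]

block = [bytes(CHUNK_SIZE_BYTES)] * 2048                        # ~1 MB block in 512 B chunks
honest = make_publisher(block, withheld=set())
withholding = make_publisher(block, withheld=set(range(1024)))  # half the data is missing

print(block_is_available(honest, len(block)))       # True
print(block_is_available(withholding, len(block)))  # False with overwhelming probability
```

Note the client only downloads about 15 KB here (30 samples x 512 bytes) even though the block is ~1 MB, which is exactly the "few kilobytes per block" point from the first bullet.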
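And a similarly hand-wavy sketch of the fraud-proof path from the second bullet. Names and flow are invented for illustration; production fraud proofs are interactive games settled on-chain, not a single function call.

```python
import random

def challenger_round(blocks, claimed_results, execute, rng=random):
    """A challenger fully re-executes one randomly picked block and raises a
    challenge if its result disagrees with what the staked attesters signed."""
    i = rng.randrange(len(blocks))
    if execute(blocks[i]) != claimed_results[i]:
        return {"challenged_block": i}
    return None

def resolve_challenge(challenge, blocks, claimed_results, execute):
    """Every node fully verifies only the challenged block; if the claim really is
    wrong, that block and everything built on top of it must be recomputed."""
    i = challenge["challenged_block"]
    return execute(blocks[i]) == claimed_results[i]

# Toy usage: a "block" is a list of ints and "execution" is just summing it.
blocks = [[1, 2, 3], [4, 5, 6]]
claimed = [6, 100]   # the second claim is fraudulent
for _ in range(10):  # challengers keep sampling blocks over time
    c = challenger_round(blocks, claimed, execute=sum)
    if c is not None:
        print("challenge on block", c["challenged_block"],
              "-> claim holds up?", resolve_challenge(c, blocks, claimed, execute=sum))
        break
```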
And that's all there is to it! In the case of Ethereum sharding, the near-term plan is to make sharded blocks data-only; that is, the shards are purely a "data availability engine", and it's the job of layer-2 rollups to use that secure data space, plus either fraud proofs or ZK-SNARKs, to implement high-throughput secure transaction processing capabilities. However, it's also possible to create a built-in system that adds "native" high-throughput execution.