The premise in the question is incorrect. There were such chips. The question also fails to allow for the way that the silicon-chip industry developed.
Moore's Law basically said that every 18 months, it was possible to build chips with twice as many transistors for the same amount of money.
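To give a feel for how quickly that doubling compounds, here is a minimal back-of-the-envelope sketch (Python, purely illustrative; real chips of course varied around the trend):

```python
# A toy calculation of the "twice the transistors every 18 months" rule:
# how much the affordable transistor budget grows over a span of years.
def moores_law_factor(years: float, months_per_doubling: float = 18) -> float:
    """Growth multiplier after `years` under a strict 18-month doubling."""
    return 2 ** (years * 12 / months_per_doubling)

print(moores_law_factor(4))    # 1975 -> 1979 (6502 to 68000 era): ~6x
print(moores_law_factor(10))   # one decade: ~100x
print(moores_law_factor(20))   # two decades: ~10,000x
```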
The 6502 (1975) is a mid-1970s design. In the '70s it cost a lot to use even thousands of transistors; the 6502 succeeded partly because it was very small and simple and didn't use many, compared to more complex rivals such as the Z80 and 6809.
The 68000 (1979) was also from the same decade. It became affordable in the early 1980s (e.g. Apple Lisa) and slightly more so by 1984 (Apple Macintosh). However, note that Motorola also offered a version with an 8-bit external bus, the 68008, as used in the Sinclair QL. This reduced performance, but for cheaper machines it was worth it, because a full 16-bit chipset with 16-bit-wide memory was expensive.
Note that just 4 years separates the 6502 and 68000. That's how much progress was being made then.
The 65C816 was a (partially) 16-bit successor to the 6502. Note that WDC also designed a 32-bit successor, the 65C832. Here is a datasheet: https://downloads.reactivemicro.com/Electronics/CPU/WDC%2065C832%20Datasheet.pdf
However, this was never produced. As a 16-bit extension to an 8-bit design, the 65C816 was compromised and slower than pure 16-bit designs. A 32-bit design would have been even more compromised.
Note, this is also why Acorn succeeded with the ARM processor: its clean 32-bit-only design was more efficient than Motorola's combination 16/32-bit design, which was partly inspired by the DEC PDP-11 minicomputer. Acorn evaluated the 68000, 65C816 (which it used in the rare Acorn Communicator), NatSemi 32016, Intel 80186 and other chips and found them wanting. Part of the brilliance of the Acorn design was that it effectively used slow DRAM and did not need elaborate caching or expensive high-speed RAM, resulting in affordable home computers that were nearly 10x faster than rival 68000 machines.
The 68000 was 16-bit externally but 32-bit internally: that is why the Atari machine that used it was called the ST, short for "sixteen/thirty-two".
The first fully-32 bit 680x0 chip was the 68020 (1984). It was faster but did not offer a lot of new capabilities, and its successor the 68030 was more successful, partly because it integrated a memory management unit. Compare with the Intel 80386DX (1985), which did much the same: 32-bit bus, integral MMU.
The 80386DX struggled in the market because of the expense of making 32-bit motherboards with 32-bit-wide RAM, so it was joined by the 80386SX (1988): the same 32-bit core, but with a half-width (16-bit) external bus. This is the same design principle as the 68008.
Motorola's equivalent was the fairly rare 68EC020.
The reason was that around the end of the 1980s, when these devices came out, 16MB of memory was a huge amount and very expensive. There was no need for mass-market chips to address 4GB of RAM — that would have cost hundreds of thousands of £/$ at the time. Their 32-bit cores were for performance, not capacity.
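The address-space arithmetic behind that is easy to check (a small illustrative snippet):

```python
# Why 24-bit addressing was plenty at the end of the 1980s:
# 2^24 bytes is 16 MB, while a full 32-bit address space is 4 GB.
MB = 2 ** 20
GB = 2 ** 30

print(2 ** 24 // MB, "MB")        # 16 MB: limit of a 24-bit address bus (68000, 80386SX)
print(2 ** 32 // GB, "GB")        # 4 GB: limit of a full 32-bit address bus
print(2 ** 32 // 2 ** 24, "x")    # 256x more than the 16 MB that already seemed huge
```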
The 68030 was followed by the 68040 (1990), just as the 80386 was followed by the 80486 (1989). Both also integrated floating-point coprocessors into the main CPU die. The progress of Moore's Law had now made this affordable.
The line ended with the 68060 (1994), which was still 32-bit. Like Intel's 80586 family (by then called "Pentium", because Intel couldn't trademark a number), it had Level 1 cache on the CPU die.
The reason was that, at this time, fabricating large chips with millions of transistors was still expensive, and these chips could already address more RAM than it was remotely affordable to fit into a personal computer.
So the priority at the time was to find ways to spend a limited transistor budget on making faster chips: 8-bit → 16-bit → 32-bit → integrate MMU → integrate FPU → integrate L1 cache.
This line of development somewhat ran out of steam by the mid-1990s. This is why there was no successor to the 68060.
Most of the industry switched to the path Acorn had started a decade earlier: dispensing with backwards compatibility with now-compromised 1970s designs and starting afresh with a stripped-down, simpler, reduced design — Reduced Instruction Set Computing (RISC).
ARM chips supported several OSes: RISC OS, Unix, Psion EPOC (later renamed Symbian), Apple NewtonOS, etc. Motorola's supported more: LisaOS, classic MacOS, Xenix, ST TOS, AmigaDOS, multiple Unixes, etc.
No single one was dominant.
Intel was constrained by the success of Microsoft's MS-DOS/Windows family, which sold far more than all the other x86 OSes put together. So backwards-compatibility was more important for Intel than for Acorn or Motorola.
Intel had tried several other CPU architectures: iAPX-432, i860, i960 and later Itanium. All failed in the general-purpose market.
Thus, Intel was forced to find a way to make x86 quicker. It did this by breaking x86 instructions down into RISC-like "micro-operations", re-sequencing them for faster execution, running them on a RISC-like core, and then reassembling the results into x86 afterwards. It did this first on the Pentium Pro, which only handled x86-32 instructions efficiently; that was awkward when many people were still running Windows 95/98, an OS composed of a lot of x86-16 code that also ran a lot of x86-16 apps.
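As a purely conceptual illustration of that decode step (a sketch only: the text-based instruction, the decode function and the "micro-op" names below are invented for the example; real decoders work on binary machine code and emit undocumented internal operations):

```python
# Toy illustration of CISC-to-micro-op decoding, in the spirit of the
# Pentium Pro's approach. It only shows the idea that one memory-touching
# x86 instruction becomes a few simpler, register-only, RISC-like steps.
def decode(instruction: str) -> list[str]:
    op, dest, src = instruction.replace(",", "").split()
    if src.startswith("["):                  # register-memory form, e.g. add eax, [ebx]
        return [
            f"LOAD  tmp0, {src}",            # fetch the memory operand first
            f"{op.upper()}  {dest}, tmp0",   # then do the ALU work on registers only
        ]
    return [f"{op.upper()}  {dest}, {src}"]  # register-register form is already "RISC-like"

for uop in decode("add eax, [ebx]"):
    print(uop)
```

The point is only that one complex instruction turns into a couple of simpler steps that an out-of-order, RISC-style core can schedule freely before the results are retired in x86 order.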
Then came the Pentium II, an improved Pentium Pro with onboard L1 cache (and, soon after, onboard L2) and improved x86-16 optimisation, arriving as the PC market moved towards fully x86-32 Windows, culminating in Windows XP.
In other words, even by the turn of the century, the software was still moving to 32-bit and the limits of 32-bit operation (chiefly, 4GB RAM) were still largely theoretical. So, the effort went into making faster chips with the existing transistor budget.
Only by the middle of the first decade of the 21st century did 4GB become a real bottleneck, creating the conditions for AMD to produce a 64-bit extension to x86.
The reasons that 64-bit happened did not apply in the 1990s.
From the 1970s to about 2005, 32 bits were more than enough, and CPU makers spent their transistor budgets on integrating more go-faster parts into their CPUs. Eventually this strategy ran out of road, once CPUs included the integer core, a floating-point unit, a memory management unit, a small amount of L1 cache and a larger amount of slower L2 cache.
Then there was only one way to go: integrate a second CPU onto the chip, firstly as two separate CPU dies in one package, then as true dual-core dies. Luckily, by this time NT had replaced Win9x, and both NT and Unix could support symmetric multiprocessing.
So: dual-core chips, then quad-core chips. After that, a single user on a desktop or laptop gets little more benefit. There are plenty of CPUs with more cores, but they are almost exclusively used in servers.
Secondly, the CPU industry was by now reaching the limits of how fast silicon chips can run, and of how much heat they emit when doing so. The megahertz race ended.
So the emphases changed to two new ones, as the limiting factors became: how many cores software could usefully exploit, how much electricity the chips consumed, and how much heat they gave off.
These last two things are two sides of the same coin, which is why I said two, not three.
Koomey's Law (which tracks computations per joule of energy, rather than transistors for the money) has replaced Moore's Law.