Actually, no.
The more frames per second you run at, the smoother the illusion of motion.
The minimum number needed to create this illusion is about 16, and this is what early film cameras ran at. However, it was quickly discovered that this wasn't enough to cope with fast motion (e.g. in the cowboy movies of the day), so the rate was raised to the current 24. While 24fps is enough to cover motion, the level of flicker when the film is projected at that rate is unacceptable, so each frame is actually shown twice, the shutter opening and closing at a rate of 48Hz.
UK television runs at 25fps. The rate was chosen because it allowed the video image to be synchronised to the mains frequency of 50Hz, and it was close enough to the accepted film rate to make airing a movie fairly straightforward. US television runs at 30fps, as the mains frequency there is 60Hz. Putting film onto US video is more complex: the 24 film frames per second have to be spread unevenly across the 60 fields, a process known as 3:2 pulldown.
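The film-to-US-video step hinted at above uses a 2:3 (commonly called "3:2") pulldown cadence, which can be sketched in a few lines of Python. This is a toy illustration, not production telecine code, and the frame labels are purely illustrative:

```python
def pulldown_32(frames):
    """Expand a list of film frames into video fields using the 2:3 cadence."""
    fields = []
    for i, frame in enumerate(frames):
        # Alternate frames contribute 2 and 3 fields: 2 + 3 = 5 fields
        # per pair of frames, so 24 frames/s become exactly 60 fields/s.
        repeat = 2 if i % 2 == 0 else 3
        fields.extend([frame] * repeat)
    return fields

print(pulldown_32(["A", "B", "C", "D"]))
# -> ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

The uneven 2-3-2-3 repetition is what produces the slight judder characteristic of film shown on US video.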
The above paragraph is not entirely true.
Video frames actually consist of a pair of fields, captured at a rate of 50 (or 60) per second. The two images represent unique moments in time and are interlaced together - the first field contains lines 1, 3, 5, 7... and the second contains lines 2, 4, 6, 8... The net effect is to give a picture update rate of 50 or 60 per second, thus reducing the flicker without having to capture and transmit 50 complete frames per second, which the technology of the day couldn't handle. (US colour video actually runs at 59.94 fields per second, but that's another story.)
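The weave of two fields into one frame can be sketched as follows - a toy model in which each scan line is just a string, where real fields would be rows of pixels:

```python
def interlace(field_odd, field_even):
    """Weave two fields (lists of scan lines) into one full frame.

    field_odd holds lines 1, 3, 5, ...; field_even holds lines 2, 4, 6, ...
    """
    frame = []
    for odd_line, even_line in zip(field_odd, field_even):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

# A toy 6-line frame built from two 3-line fields:
print(interlace(["L1", "L3", "L5"], ["L2", "L4", "L6"]))
# -> ['L1', 'L2', 'L3', 'L4', 'L5', 'L6']
```

Note that because the two fields were captured 1/50th (or 1/60th) of a second apart, anything moving in the scene will show a comb-like offset between adjacent lines of the woven frame.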
The problem involved in converting between UK and US video should now be apparent: the source fields were captured at slightly different points in time, and will only line up with the required destination field times a few times per second. There are three basic approaches to achieving the conversion:
You can repeat or omit fields as required. This is simple, cheap and has the significant drawback of producing juddery motion (akin to the effects produced when film is transferred to US video standards).
You can create the new fields required by blending the existing ones together, the proportions of the blend being determined by the proximity of each original field to the desired output time. This is also simple and cheap, and more-or-less eliminates the judder, albeit at the expense of a slightly blurry image, particularly where there's motion.
Finally, the new fields required can be interpolated by doing motion-estimation. This is complex and expensive, but if done well, it produces by far the best results.
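The second approach - blending neighbouring fields in proportion to their proximity in time - can be illustrated with a deliberately simplified sketch in Python. Here each field is reduced to a single brightness number standing in for a whole image; a real converter applies the same weighting to every pixel:

```python
def blend_convert(src_fields, src_rate, dst_rate, n_out):
    """Resample fields by temporal blending (the second method above).

    Each output field is a weighted mix of the two nearest source
    fields, the weights set by how close each one is in time.
    """
    out = []
    for j in range(n_out):
        t = j * src_rate / dst_rate        # output time, in source-field units
        i = int(t)
        frac = t - i                       # how far past source field i we are
        a = src_fields[min(i, len(src_fields) - 1)]
        b = src_fields[min(i + 1, len(src_fields) - 1)]
        out.append(a * (1 - frac) + b * frac)  # the closer field weighs more
    return out

# Convert five 50Hz fields into six 60Hz fields:
print(blend_convert([0.0, 10.0, 20.0, 30.0, 40.0], 50, 60, 6))
```

Most output fields fall between two source times, so they come out as mixtures - which is exactly the slight blurring on motion described above.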
All of these involve a compromise; there is no way to achieve perfect results. Hence, video purists like myself prefer to obtain and view video-originated programmes in their original format.
Most US DVD players which can handle PAL discs do so by converting the video data in this fashion, probably by using the second method described above. Really cheap players may use the first method. I'm not aware of any that currently use the third method, due to its cost. When I do standards-conversions, I use the second method, as I haven't the software or hardware to do number 3, and I find the results to be about as acceptable as any US picture I've seen.