generalized 16-bit fixed-width math

This commit is contained in:
~d6 2022-03-13 23:23:20 -04:00
parent 80a4731d72
commit 433c6044e9
1 changed files with 145 additions and 0 deletions

fix16.tal Normal file

@@ -0,0 +1,145 @@
( fix16.tal )
( )
( use a signed 16-bit short as a fixed point number. )
( )
( numbers are interpreted as fractions with an implicit )
( 256 denominator. the upper byte is signed and )
( represents the "whole" part of the number, and the )
( lower byte is unsigned and represents the )
( "fractional" part of the number. )
( )
( 16-bit fixed point can represent fractional values )
( in the range -128 <= x < 128. the smallest fraction it )
( can represent is 1/256, which is about 0.004. )
( )
( SHORT FRACTION DECIMAL )
( #0000 0/256 0.000 )
( #0001 1/256 0.004 )
( #0002 2/256 0.008 )
( #0040 64/256 0.250 )
( #0080 128/256 0.500 )
( #0100 256/256 1.000 )
( #0700 1792/256 7.000 )
( #7f00 32512/256 127.000 )
( #7fff 32767/256 127.996 )
( #8000 -32768/256 -128.000 )
( #8001 -32767/256 -127.996 )
( #8100 -32512/256 -127.000 )
( #ff00 -256/256 -1.000 )
( #ffff -1/256 -0.004 )
( )
( many 8.8 operations are equivalent to u16: )
( * equality )
( * addition/subtraction )
( * comparisons and division, given the sign handling below )
( )
( but due to 16-bit truncation multiplication differs... )
( )
( x*y = x0*y0 + x0*y1/256 + x1*y0/256 + x1*y1/65536 )
( )
( since we only have 16-bits: )
( 1. we need to drop the 8 high bits from x0*y0 )
( 2. we need to drop the 8 low bits from x1*y1 )
( 3. we need to use all the bits from x0*y1 and x1*y0 )
( )
( that said, if either x or y is whole (i.e. ends in 00) )
( then we can just shift that argument right by 8 and use )
( MUL2 as normal. )
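( )
( for example, with x = #0180 and y = #0280 the 16-bit )
( result is assembled from four terms: )
( x0*y0 = 01*02 = 0002, shifted up 8 -> 0200 )
( x0*y1 = 01*80 = 0080, kept as-is -> 0080 )
( x1*y0 = 80*02 = 0100, kept as-is -> 0100 )
( x1*y1 = 80*80 = 4000, shifted down 8 -> 0040 )
( sum = 03c0, i.e. 1.5 * 2.5 = 3.75 )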
|1000
( useful constants )
( )
( to generate your own: )
( 1. take true value, e.g. 3.14159... )
( 2. multiply by 256 )
( 3. round to nearest whole number )
( 4. emit hex output )
( )
( in python: hex(round(x * 256)) )
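( e.g. for pi: round of 3.14159 * 256 = 804 = 0x324 -> #0324 )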
%x16-zero { #0000 } ( 0.0 )
%x16-one { #0100 } ( 1.0 )
%x16-two { #0200 } ( 2.0 )
%x16-ten { #0a00 } ( 10.0 )
%x16-hundred { #6400 } ( 100.0 )
%x16-minus-one { #ff00 } ( -1.0 )
%x16-minus-two { #fe00 } ( -2.0 )
%x16-pi/2 { #0192 } ( 1.57079... )
%x16-pi { #0324 } ( 3.14159... )
%x16-pi*2 { #0648 } ( 6.28318... )
%x16-e { #02b8 } ( 2.71828... )
%x16-phi { #019e } ( 1.61803... )
%x16-sqrt-2 { #016a } ( 1.41421... )
%x16-sqrt-3 { #01bb } ( 1.73205... )
%x16-epsilon { #0001 } ( 0.00390625 )
%x16-minimum { #8000 } ( -128.0 )
%x16-maximum { #7fff } ( 127.99609375 )
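( e.g. x16-pi x16-two ;x16-mul JSR2 leaves #0648, matching x16-pi*2 )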
( useful macros )
%x16-is-non-neg { x16-minimum LTH2 } ( x* -> bool^ )
%x16-is-neg { x16-maximum GTH2 } ( x* -> bool^ )
( comparison between x and y. )
( - ff: x < y )
( - 00: x = y )
( - 01: x > y )
@x16-cmp ( x* y* -> c^ )
STH2k x16-is-neg ,&yn JCN
DUP2 x16-is-non-neg ,&ypxp JCN ( y>=0 )
POP2 POP2r #ff JMP2r ( x<0 y>=0 )
&ypxp STH2r ;x16-ucmp JMP2 ( x>=0 y>=0 )
&yn DUP2 x16-is-neg ,&ynxn JCN ( y<0 )
POP2 POP2r #01 JMP2r ( x>=0 y<0 )
&ynxn STH2r ;x16-ucmp JMP2 ( x<0 y<0 )
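( e.g. #ff00 #0100 ;x16-cmp JSR2 leaves ff, since -1.0 < 1.0 )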
( unsigned comparison between x and y. )
( - ff: x < y )
( - 00: x = y )
( - 01: x > y )
@x16-ucmp ( x* y* -> c^ )
LTH2k ,&lt JCN GTH2 JMP2r
&lt POP2 POP2 #ff JMP2r
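( e.g. #0080 #0100 ;x16-ucmp JSR2 leaves ff, since 0.5 < 1.0 )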
@x16-eq ( x* y* -> x=y^ ) EQU2 JMP2r
@x16-ne ( x* y* -> x!=y^ ) NEQ2 JMP2r
@x16-lt ( x* y* -> x<y^ ) ;x16-cmp JSR2 #ff EQU JMP2r
@x16-lteq ( x* y* -> x<=y^ ) ;x16-cmp JSR2 #01 NEQ JMP2r
@x16-gt ( x* y* -> x>y^ ) ;x16-cmp JSR2 #01 EQU JMP2r
@x16-gteq ( x* y* -> x>=y^ ) ;x16-cmp JSR2 #ff NEQ JMP2r
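( e.g. #0200 #0180 ;x16-gt JSR2 leaves 01, since 2.0 > 1.5 )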
@x16-is-whole ( x* -> bool^ )
NIP #00 EQU JMP2r
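( e.g. #0300 leaves 01, #0380 leaves 00 )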
@x16-add ( x* y* -> x+y* )
ADD2 JMP2r
@x16-sub ( x* y* -> x-y* )
SUB2 JMP2r
@x16-negate ( x* -> -x* )
#0000 SWP2 SUB2 JMP2r
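( e.g. #0180 ;x16-negate JSR2 leaves #fe80, i.e. -1.5, and )
( #0180 #ff00 ;x16-add JSR2 leaves #0080, i.e. 1.5 + -1.0 = 0.5 )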
@x16-mul ( x* y* -> xy* )
DUP #00 EQU ,&rhs-whole JCN
SWP2 DUP #00 EQU ,&rhs-whole JCN
,&x3 STR ,&x1 STR ,&y3 STR ,&y1 STR
LIT2 &x2 00 &x3 00 LIT2 &y2 00 &y3 00 MUL2 #08 SFT2
LIT2 &x0 00 &x1 00 ,&y2 LDR2 MUL2 ADD2
,&x2 LDR2 LIT2 &y0 00 &y1 00 MUL2 ADD2
,&x0 LDR2 ,&y0 LDR2 MUL2 #80 SFT2 ADD2 JMP2r
&rhs-whole #08 SFT2 MUL2 JMP2r
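( e.g. #0180 #0280 ;x16-mul JSR2 leaves #03c0, i.e. 1.5 * 2.5 = 3.75 )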
@x16-div ( x* y* -> x/y* )
SWP2 DUP2 x16-is-non-neg ,&non-negative JCN ( y x )
;x16-negate JSR2 SWP2 DIV2 ;x16-negate JMP2
&non-negative
SWP2 DIV2 JMP2r
@x16-mod ( x* y* -> x%y* )
OVR2 OVR2 ;x16-div JSR2 ;x16-mul JSR2 SUB2 JMP2r
@x16-mod-div ( x* y* -> x%y* x/y* )
OVR2 OVR2 ;x16-div JSR2 STH2k ;x16-mul JSR2 SUB2 STH2r JMP2r
@x16-div-mod ( x* y* -> x/y* x%y* )
;x16-mod-div JSR2 SWP2 JMP2r