#! /usr/bin/env perl
# Copyright 2010-2016 The OpenSSL Project Authors. All Rights Reserved.
#
# Licensed under the OpenSSL license (the "License"). You may not use
# this file except in compliance with the License. You can obtain a copy
# in the file LICENSE in the source distribution or at
# https://www.openssl.org/source/license.html

#
# ====================================================================
# Written by Andy Polyakov <appro@openssl.org> for the OpenSSL
# project. The module is, however, dual licensed under OpenSSL and
# CRYPTOGAMS licenses depending on where you obtain it. For further
# details see http://www.openssl.org/~appro/cryptogams/.
# ====================================================================
#
# March, May, June 2010
#
# The module implements the "4-bit" GCM GHASH function and the
# underlying single multiplication operation in GF(2^128). "4-bit"
# means that it uses a 256-byte per-key table [+64/128 bytes of fixed
# table]. It has two code paths: vanilla x86 and vanilla SSE. The
# former is executed on 486 and Pentium, the latter on all others.
# SSE GHASH features a so-called "528B" variant of the "4-bit" method,
# utilizing an additional 256+16 bytes of per-key storage [+512 bytes
# of shared table]. Performance results are for the streamed GHASH
# subroutine and are expressed in cycles per processed byte, less is
# better:
#
#		gcc 2.95.3(*)	SSE assembler	x86 assembler
#
# Pentium	105/111(**)	-		50
# PIII		68 /75		12.2		24
# P4		125/125		17.8		84(***)
# Opteron	66 /70		10.1		30
# Core2		54 /67		8.4		18
# Atom		105/105		16.8		53
# VIA Nano	69 /71		13.0		27
#
# (*)	gcc 3.4.x was observed to generate a few percent slower code,
#	which is one of the reasons why the 2.95.3 results were chosen;
#	another reason is the lack of 3.4.x results for older CPUs.
#	Comparison with the SSE results is not completely fair, because
#	the C results are for the vanilla "256B" implementation, while
#	the assembler results are for "528B";-)
# (**)	second number is the result for code compiled with the -fPIC
#	flag, which is actually more relevant, because the assembler
#	code is position-independent;
# (***)	see the comment in the non-MMX routine for further details;
#
# To summarize, it's >2-5 times faster than gcc-generated code. To
# anchor it to something else, SHA1 assembler processes one byte in
# ~7 cycles on contemporary x86 cores. As for the choice of MMX/SSE
# in particular, see the comment at the end of the file...

# May 2010
#
# Added a PCLMULQDQ version performing at 2.10 cycles per processed
# byte. The question is how close is it to the theoretical limit? The
# pclmulqdq instruction latency appears to be 14 cycles, and there
# can't be more than 2 of them executing at any given time. This means
# that a single Karatsuba multiplication takes 28 cycles *plus* a few
# cycles for pre- and post-processing. The multiplication then has to
# be followed by modulo-reduction. Given that the aggregated reduction
# method [see "Carry-less Multiplication and Its Usage for Computing
# the GCM Mode" white paper by Intel] allows the reduction to be
# performed only once in a while, we can assume that the asymptotic
# performance can be estimated as (28+Tmod/Naggr)/16, where Tmod is
# the time to perform the reduction and Naggr is the aggregation
# factor.
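#
# As a quick sanity check of this estimate, a throwaway helper
# (hypothetical, defined for illustration only and never called by the
# generator) evaluates (Tmul+Tmod/Naggr)/16 for the figures discussed
# below: Intel's Tmod of ~19 with Naggr=4 gives (28+19/4)/16 ~ 2.05,
# while this module's Tmod of ~13 with Naggr=2 gives (28+13/2)/16 ~
# 2.16 cycles per processed byte.
sub _ghash_cpb_estimate { my ($Tmul,$Tmod,$Naggr)=@_; ($Tmul+$Tmod/$Naggr)/16; }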
#
# Before we proceed to this implementation, let's have a closer look
# at the best-performing code suggested by Intel in their white paper.
# By tracing inter-register dependencies, Tmod is estimated as ~19
# cycles, and the Naggr chosen by Intel is 4, resulting in 2.05 cycles
# per processed byte. As implied, this is quite an optimistic estimate,
# because it does not account for Karatsuba pre- and post-processing,
# which for a single multiplication is ~5 cycles. Unfortunately Intel
# does not provide performance data for GHASH alone. But benchmarking
# AES_GCM_encrypt ripped out of Fig. 15 of the white paper with aadt
# alone resulted in 2.46 cycles per byte out of a 16KB buffer. Note
# that the result accounts even for pre-computing of the degrees of
# the hash key H, but its portion is negligible at 16KB buffer size.
#
# Moving on to the implementation in question. Tmod is estimated as
# ~13 cycles and Naggr is 2, giving an asymptotic performance of ...
# 2.16. How is it possible that the measured performance is better
# than the optimistic theoretical estimate? There is one thing Intel
# failed to recognize. By serializing GHASH with CTR in the same
# subroutine, the former's performance is indeed limited by the
# (Tmul+Tmod/Naggr)/16 equation above. But if the GHASH procedure is
# detached, the modulo-reduction can be interleaved with Naggr-1
# multiplications at the instruction level, and under ideal conditions
# it even disappears from the equation. So the optimistic theoretical
# estimate for this implementation is ... 28/16=1.75, and not 2.16.
# Well, that's probably way too optimistic, at least for such a small
# Naggr. I'd argue that (28+Tproc/Naggr)/16, where Tproc is the time
# required for Karatsuba pre- and post-processing, is a more realistic
# estimate. In this case it gives ... 1.91 cycles. In other words,
# depending on how well we can interleave the reduction with one of
# the two multiplications, the performance should be between 1.91 and
# 2.16 cycles per byte. As already mentioned, this implementation
# processes one byte out of an 8KB buffer in 2.10 cycles, while the
# x86_64 counterpart does it in 2.02. x86_64 performance is better,
# because the larger register bank allows reduction and multiplication
# to be interleaved better.
#
# Does it make sense to increase Naggr? To start with, it's virtually
# impossible in 32-bit mode, because of the limited register bank
# capacity. Otherwise the improvement has to be weighed against slower
# setup, as well as the increase in code size and complexity. As even
# the optimistic estimate doesn't promise a 30% performance
# improvement, there are currently no plans to increase Naggr.
#
# Special thanks to David Woodhouse for providing access to a
# Westmere-based system on behalf of Intel Open Source Technology
# Centre.

# April 2011
#
# Tweaked to optimize transitions between integer and FP operations
# on the same XMM register. The PCLMULQDQ subroutine was measured to
# process one byte in 2.07 cycles on Sandy Bridge, and in 2.12 on
# Westmere. The minor regression on Westmere is outweighed by the
# ~15% improvement on Sandy Bridge. Strangely enough, an attempt to
# modify the 64-bit code in a similar manner resulted in almost 20%
# degradation on Sandy Bridge, where the original 64-bit code
# processes one byte in 1.95 cycles.

#####################################################################
# For reference, AMD Bulldozer processes one byte in 1.98 cycles in
# 32-bit mode and 1.89 in 64-bit.

# February 2013
#
# Overhaul: aggregate Karatsuba post-processing, improve ILP in
# reduction_alg9. Resulting performance is 1.96 cycles per byte on
# Westmere, 1.95 on Sandy/Ivy Bridge, 1.76 on Bulldozer.

# This file was patched in BoringSSL to remove the variable-time 4-bit
# implementation.
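# The functions generated below are expected to be called from C with
# prototypes along the following lines (a sketch inferred from the
# wparam() argument usage below; the authoritative declarations live
# in the caller, e.g. gcm128.c):
#
#	void gcm_init_clmul(u128 Htable[16], const u64 Xi[2]);
#	void gcm_gmult_clmul(u64 Xi[2], const u128 Htable[16]);
#	void gcm_ghash_clmul(u64 Xi[2], const u128 Htable[16],
#			     const u8 *inp, size_t len);
#
# len is processed in 16-byte blocks, and Xi is updated in place.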
$0 =~ m/(.*[\/\\])[^\/\\]+$/; $dir=$1;
push(@INC,"${dir}","${dir}../../../perlasm");
require "x86asm.pl";

$output=pop;
open STDOUT,">$output";

&asm_init($ARGV[0],$x86only = $ARGV[$#ARGV] eq "386");

$sse2=0;
for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }

if (!$x86only) {{{
if ($sse2) {{
######################################################################
# PCLMULQDQ version.

$Xip="eax";
$Htbl="edx";
$const="ecx";
$inp="esi";
$len="ebx";

($Xi,$Xhi)=("xmm0","xmm1");	$Hkey="xmm2";
($T1,$T2,$T3)=("xmm3","xmm4","xmm5");
($Xn,$Xhn)=("xmm6","xmm7");

&static_label("bswap");

sub clmul64x64_T2 {	# minimal "register" pressure
my ($Xhi,$Xi,$Hkey,$HK)=@_;

	&movdqa		($Xhi,$Xi);		#
	&pshufd		($T1,$Xi,0b01001110);
	&pshufd		($T2,$Hkey,0b01001110)	if (!defined($HK));
	&pxor		($T1,$Xi);		#
	&pxor		($T2,$Hkey)		if (!defined($HK));
			$HK=$T2			if (!defined($HK));

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$HK,0x00);		#######
	&xorps		($T1,$Xi);		#
	&xorps		($T1,$Xhi);		#

	&movdqa		($T2,$T1);		#
	&psrldq		($T1,8);
	&pslldq		($T2,8);		#
	&pxor		($Xhi,$T1);
	&pxor		($Xi,$T2);		#
}

sub clmul64x64_T3 {
# Even though this subroutine offers visually better ILP, it
# was empirically found to be a tad slower than the version above.
# At least in gcm_ghash_clmul context. But it's just as well,
# because loop modulo-scheduling is possible only thanks to
# minimized "register" pressure...
my ($Xhi,$Xi,$Hkey)=@_;

	&movdqa		($T1,$Xi);		#
	&movdqa		($Xhi,$Xi);
	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pshufd		($T2,$T1,0b01001110);	#
	&pshufd		($T3,$Hkey,0b01001110);
	&pxor		($T2,$T1);		#
	&pxor		($T3,$Hkey);
	&pclmulqdq	($T2,$T3,0x00);		#######
	&pxor		($T2,$Xi);		#
	&pxor		($T2,$Xhi);		#

	&movdqa		($T3,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T3,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
}
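# For reference, the Karatsuba identity both subroutines above rely
# on: with X = Xh:Xl and H = Hh:Hl, the 128x128-bit carry-less product
# takes only three 64x64 multiplications, because over GF(2), where
# addition is XOR,
#
#	Xh*Hl + Xl*Hh = (Xh+Xl)*(Hh+Hl) + Xh*Hh + Xl*Hl
#
# A bit-by-bit model of one 64x64 carry-less multiplication follows
# (hypothetical, for illustration only, never called by the generator):
sub _ref_clmul64 {
my ($x,$y)=@_;				# inputs as Math::BigInt objects
	require Math::BigInt;		# core module, loaded on demand
	my $r=Math::BigInt->bzero();	# 128-bit carry-less product
	for my $i (0..63) {
	    # if bit i of $x is set, XOR in $y shifted left by i
	    $r->bxor($y->copy()->blsft($i)) if ($x->copy()->brsft($i)->is_odd());
	}
	return $r;
}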
if (1) {		# Algorithm 9 with <<1 twist.
			# Reduction is shorter and uses only two
			# temporary registers, which makes it better
			# candidate for interleaving with 64x64
			# multiplication. Pre-modulo-scheduled loop
			# was found to be ~20% faster than Algorithm 5
			# below. Algorithm 9 was therefore chosen for
			# further optimization...

sub reduction_alg9 {	# 17/11 times faster than Intel version
my ($Xhi,$Xi) = @_;

	# 1st phase
	&movdqa		($T2,$Xi);		#
	&movdqa		($T1,$Xi);
	&psllq		($Xi,5);
	&pxor		($T1,$Xi);		#
	&psllq		($Xi,1);
	&pxor		($Xi,$T1);		#
	&psllq		($Xi,57);		#
	&movdqa		($T1,$Xi);		#
	&pslldq		($Xi,8);
	&psrldq		($T1,8);		#
	&pxor		($Xi,$T2);
	&pxor		($Xhi,$T1);		#

	# 2nd phase
	&movdqa		($T2,$Xi);
	&psrlq		($Xi,1);
	&pxor		($Xhi,$T2);		#
	&pxor		($T2,$Xi);
	&psrlq		($Xi,5);
	&pxor		($Xi,$T2);		#
	&psrlq		($Xi,1);		#
	&pxor		($Xi,$Xhi);		#
}
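# A note on the modulus: GHASH works in GF(2^128) modulo
# x^128+x^7+x^2+x+1, with bits stored in reflected order. With the
# <<1 twist applied at key setup time, that modulus appears in this
# module as the 128-bit constant 0xc20000...0001 (stored at "bswap"+16
# below and referred to as 0x1c2_polynomial), and the reduction above
# folds the 256-bit product Xhi:Xi back to 128 bits with plain shifts
# and XORs instead of extra carry-less multiplications.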
&function_begin_B("gcm_init_clmul");
	&mov		($Htbl,&wparam(0));
	&mov		($Xip,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Hkey,&QWP(0,$Xip));
	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap

	# <<1 twist
	&pshufd		($T2,$Hkey,0b11111111);	# broadcast uppermost dword
	&movdqa		($T1,$Hkey);
	&psllq		($Hkey,1);
	&pxor		($T3,$T3);		#
	&psrlq		($T1,63);
	&pcmpgtd	($T3,$T2);		# broadcast carry bit
	&pslldq		($T1,8);
	&por		($Hkey,$T1);		# H<<=1

	# magic reduction
	&pand		($T3,&QWP(16,$const));	# 0x1c2_polynomial
	&pxor		($Hkey,$T3);		# if(carry) H^=0x1c2_polynomial

	# calculate H^2
	&movdqa		($Xi,$Hkey);
	&clmul64x64_T2	($Xhi,$Xi,$Hkey);
	&reduction_alg9	($Xhi,$Xi);

	&pshufd		($T1,$Hkey,0b01001110);
	&pshufd		($T2,$Xi,0b01001110);
	&pxor		($T1,$Hkey);		# Karatsuba pre-processing
	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
	&pxor		($T2,$Xi);		# Karatsuba pre-processing
	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2
	&palignr	($T2,$T1,8);		# low part is H.lo^H.hi
	&movdqu		(&QWP(32,$Htbl),$T2);	# save Karatsuba "salt"

	&ret		();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movups		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);
	&movups		($T2,&QWP(32,$Htbl));

	&clmul64x64_T2	($Xhi,$Xi,$Hkey,$T2);
	&reduction_alg9	($Xhi,$Xi);

	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);

	&ret		();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));
	&mov		($inp,&wparam(2));
	&mov		($len,&wparam(3));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&sub		($len,0x10);
	&jz		(&label("odd_tail"));

	#######
	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&movdqu		($T3,&QWP(32,$Htbl));
	&pxor		($Xi,$T1);		# Ii+Xi

	&pshufd		($T1,$Xn,0b01001110);	# H*Ii+1
	&movdqa		($Xhn,$Xn);
	&pxor		($T1,$Xn);		#
	&lea		($inp,&DWP(32,$inp));	# i+=2

	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$T3,0x00);		#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	&nop		();

	&sub		($len,0x20);
	&jbe		(&label("even_tail"));
	&jmp		(&label("mod_loop"));

&set_label("mod_loop",32);
	&pshufd		($T2,$Xi,0b01001110);	# H^2*(Ii+Xi)
	&movdqa		($Xhi,$Xi);
	&pxor		($T2,$Xi);		#
	&nop		();

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T2,$T3,0x10);		#######
	&movups		($Hkey,&QWP(0,$Htbl));	# load H

	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&movdqa		($T3,&QWP(0,$const));
	&xorps		($Xhi,$Xhn);
	&movdqu		($Xhn,&QWP(0,$inp));	# Ii
	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pxor		($T1,$Xhi);		#

	&pshufb		($Xhn,$T3);
	&pxor		($T2,$T1);		#

	&movdqa		($T1,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T1,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T1);		#
	&pshufb		($Xn,$T3);
	&pxor		($Xhi,$Xhn);		# "Ii+Xi", consume early

	&movdqa		($Xhn,$Xn);		#&clmul64x64_TX	($Xhn,$Xn,$Hkey); H*Ii+1
	&movdqa		($T2,$Xi);		#&reduction_alg9($Xhi,$Xi); 1st phase
	&movdqa		($T1,$Xi);
	&psllq		($Xi,5);
	&pxor		($T1,$Xi);		#
	&psllq		($Xi,1);
	&pxor		($Xi,$T1);		#
	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	&movups		($T3,&QWP(32,$Htbl));
	&psllq		($Xi,57);		#
	&movdqa		($T1,$Xi);		#
	&pslldq		($Xi,8);
	&psrldq		($T1,8);		#
	&pxor		($Xi,$T2);
	&pxor		($Xhi,$T1);		#
	&pshufd		($T1,$Xhn,0b01001110);
	&movdqa		($T2,$Xi);		# 2nd phase
	&psrlq		($Xi,1);
	&pxor		($T1,$Xhn);
	&pxor		($Xhi,$T2);		#
	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	&pxor		($T2,$Xi);
	&psrlq		($Xi,5);
	&pxor		($Xi,$T2);		#
	&psrlq		($Xi,1);		#
	&pxor		($Xi,$Xhi);		#
	&pclmulqdq	($T1,$T3,0x00);		#######

	&lea		($inp,&DWP(32,$inp));
	&sub		($len,0x20);
	&ja		(&label("mod_loop"));

&set_label("even_tail");
	&pshufd		($T2,$Xi,0b01001110);	# H^2*(Ii+Xi)
	&movdqa		($Xhi,$Xi);
	&pxor		($T2,$Xi);		#

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T2,$T3,0x10);		#######
	&movdqa		($T3,&QWP(0,$const));

	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&xorps		($Xhi,$Xhn);
	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
	&pxor		($T1,$Xhi);		#

	&pxor		($T2,$T1);		#

	&movdqa		($T1,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T1,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T1);		#

	&reduction_alg9	($Xhi,$Xi);

	&test		($len,$len);
	&jnz		(&label("done"));

	&movups		($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&pshufb		($T1,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg9	($Xhi,$Xi);

&set_label("done");
	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");
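# For illustration, a rough sketch of how the GCM layer drives these
# routines (hypothetical driver code; the real caller in gcm128.c also
# folds in the length block and buffers partial blocks):
#
#	gcm_init_clmul(Htable, H);	/* H = E_K(0^128), per GCM spec */
#	gcm_ghash_clmul(Xi, Htable, aad, aad_len);	/* whole blocks */
#	gcm_ghash_clmul(Xi, Htable, ctext, ctext_len);
#	gcm_gmult_clmul(Xi, Htable);	/* single multiplication by H */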
} else {		# Algorithm 5. Kept for reference purposes.

sub reduction_alg5 {	# 19/16 times faster than Intel version
my ($Xhi,$Xi)=@_;

	# <<1
	&movdqa		($T1,$Xi);		#
	&movdqa		($T2,$Xhi);
	&pslld		($Xi,1);
	&pslld		($Xhi,1);		#
	&psrld		($T1,31);
	&psrld		($T2,31);		#
	&movdqa		($T3,$T1);
	&pslldq		($T1,4);
	&psrldq		($T3,12);		#
	&pslldq		($T2,4);
	&por		($Xhi,$T3);		#
	&por		($Xi,$T1);
	&por		($Xhi,$T2);		#

	# 1st phase
	&movdqa		($T1,$Xi);
	&movdqa		($T2,$Xi);
	&movdqa		($T3,$Xi);		#
	&pslld		($T1,31);
	&pslld		($T2,30);
	&pslld		($Xi,25);		#
	&pxor		($T1,$T2);
	&pxor		($T1,$Xi);		#
	&movdqa		($T2,$T1);		#
	&pslldq		($T1,12);
	&psrldq		($T2,4);		#
	&pxor		($T3,$T1);

	# 2nd phase
	&pxor		($Xhi,$T3);		#
	&movdqa		($Xi,$T3);
	&movdqa		($T1,$T3);
	&psrld		($Xi,1);		#
	&psrld		($T1,2);
	&psrld		($T3,7);		#
	&pxor		($Xi,$T1);
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
	&pxor		($Xi,$Xhi);		#
}
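# Unlike Algorithm 9, Algorithm 5 does not pre-shift the key, so the
# <<1 adjustment above has to be applied to the whole 256-bit product
# on every reduction. That extra work is part of why the pre-twisted
# variant above measured ~20% faster and was chosen for optimization.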
&function_begin_B("gcm_init_clmul");
	&mov		($Htbl,&wparam(0));
	&mov		($Xip,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Hkey,&QWP(0,$Xip));
	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap

	# calculate H^2
	&movdqa		($Xi,$Hkey);
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2

	&ret		();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($Xn,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$Xn);

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&pshufb		($Xi,$Xn);
	&movdqu		(&QWP(0,$Xip),$Xi);

	&ret		();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));
	&mov		($inp,&wparam(2));
	&mov		($len,&wparam(3));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&sub		($len,0x10);
	&jz		(&label("odd_tail"));

	#######
	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub		($len,0x20);
	&lea		($inp,&DWP(32,$inp));	# i+=2
	&jbe		(&label("even_tail"));

&set_label("mod_loop");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	#######
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub		($len,0x20);
	&lea		($inp,&DWP(32,$inp));
	&ja		(&label("mod_loop"));

&set_label("even_tail");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	&movdqa		($T3,&QWP(0,$const));
	&test		($len,$len);
	&jnz		(&label("done"));

	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&pshufb		($T1,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg5	($Xhi,$Xi);

	&movdqa		($T3,&QWP(0,$const));
&set_label("done");
	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

}

&set_label("bswap",64);
	&data_byte(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);
	&data_byte(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xc2);	# 0x1c2_polynomial
}}	# $sse2
}}}	# !$x86only

&asciz("GHASH for x86, CRYPTOGAMS by <appro\@openssl.org>");
&asm_finish();

close STDOUT or die "error closing STDOUT";

# A question was raised about the choice of vanilla MMX. Or rather,
# why wasn't SSE2 chosen instead? In addition to the fact that MMX
# runs on legacy CPUs such as PIII, the "4-bit" MMX version was
# observed to provide better performance than the *corresponding* SSE2
# one even on contemporary CPUs. SSE2 results were provided by
# Peter-Michael Hager. He maintains an SSE2 implementation featuring a
# full range of lookup-table sizes, but with per-invocation lookup
# table setup. The latter means that the table size is chosen
# depending on how much data is to be hashed in every given call;
# more data - larger table. The best reported result for Core2 is ~4
# cycles per processed byte out of a 64KB block. This number accounts
# even for the 64KB table setup overhead. As discussed in gcm128.c, we
# choose to be more conservative with respect to lookup table sizes,
# but how do the results compare? The minimalistic "256B" MMX version
# delivers ~11 cycles on the same platform. As also discussed in
# gcm128.c, the next in line "8-bit Shoup's" or "4KB" method should
# deliver twice the performance of the "256B" one, in other words not
# worse than ~6 cycles per byte. It should also be noted that in the
# SSE2 case the improvement can be "super-linear," i.e. more than
# twice, mostly because >>8 maps to a single instruction on an SSE2
# register. This is unlike the "4-bit" case, where >>4 maps to the
# same number of instructions in both MMX and SSE2 cases. The bottom
# line is that a switch to SSE2 is considered justifiable only if we
# choose to implement the "8-bit" method...